Test Report: Docker_Linux_crio_arm64 22049

b350bc6d66813cad84bbff620e1b65ef38f64c38:2025-12-06:42657
Failed tests (40/364)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.39
44 TestAddons/parallel/Registry 16.69
45 TestAddons/parallel/RegistryCreds 0.49
46 TestAddons/parallel/Ingress 142.71
47 TestAddons/parallel/InspektorGadget 5.27
48 TestAddons/parallel/MetricsServer 6.37
50 TestAddons/parallel/CSI 29.48
51 TestAddons/parallel/Headlamp 4.14
52 TestAddons/parallel/CloudSpanner 5.36
53 TestAddons/parallel/LocalPath 9.56
54 TestAddons/parallel/NvidiaDevicePlugin 6.26
55 TestAddons/parallel/Yakd 6.28
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.28
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.32
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.46
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.67
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.52
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.84
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.17
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.05
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.81
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.11
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.6
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.71
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 2.94
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.11
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.32
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.33
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.34
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.31
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.39
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.18
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 106.17
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.42
293 TestJSONOutput/pause/Command 1.82
299 TestJSONOutput/unpause/Command 2.09
358 TestKubernetesUpgrade 779.88
385 TestPause/serial/Pause 8.64
485 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7200.081
TestAddons/serial/Volcano (0.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable volcano --alsologtostderr -v=1: exit status 11 (391.954519ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:13:35.157153  494985 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:13:35.160550  494985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:13:35.160573  494985 out.go:374] Setting ErrFile to fd 2...
	I1206 10:13:35.160579  494985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:13:35.160920  494985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:13:35.161246  494985 mustload.go:66] Loading cluster: addons-463201
	I1206 10:13:35.161640  494985 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:13:35.161652  494985 addons.go:622] checking whether the cluster is paused
	I1206 10:13:35.161764  494985 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:13:35.161774  494985 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:13:35.162286  494985 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:13:35.181585  494985 ssh_runner.go:195] Run: systemctl --version
	I1206 10:13:35.181642  494985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:13:35.202575  494985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:13:35.391406  494985 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:13:35.391501  494985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:13:35.425057  494985 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:13:35.425083  494985 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:13:35.425088  494985 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:13:35.425091  494985 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:13:35.425094  494985 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:13:35.425098  494985 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:13:35.425101  494985 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:13:35.425104  494985 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:13:35.425107  494985 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:13:35.425114  494985 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:13:35.425117  494985 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:13:35.425120  494985 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:13:35.425123  494985 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:13:35.425127  494985 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:13:35.425130  494985 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:13:35.425142  494985 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:13:35.425148  494985 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:13:35.425153  494985 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:13:35.425156  494985 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:13:35.425159  494985 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:13:35.425164  494985 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:13:35.425167  494985 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:13:35.425170  494985 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:13:35.425173  494985 cri.go:89] found id: ""
	I1206 10:13:35.425228  494985 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:13:35.456161  494985 out.go:203] 
	W1206 10:13:35.459481  494985 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:13:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:13:35.459512  494985 out.go:285] * 
	W1206 10:13:35.466054  494985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:13:35.469348  494985 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.39s)
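
The "addons disable" failures in this section all fail the same way: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then asking runc which containers it manages. On this crio node, "sudo runc list -f json" fails with "open /run/runc: no such file or directory", so the check aborts with MK_ADDON_DISABLE_PAUSED and exit status 11. The Go sketch below reproduces that probe sequence; the two command strings are copied verbatim from the log above, while running them directly on the node (instead of through the SSH session minikube opens) is a simplification for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: list kube-system containers (the cri.go:54 / crictl step in the log).
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(strings.Fields(string(out))))

	// Step 2: ask runc for its view (the ssh_runner.go:195 "sudo runc list -f json"
	// step). On this node it fails with "open /run/runc: no such file or directory",
	// which is what turns every addon disable into exit status 11: the runtime's
	// state directory is evidently not /run/runc on this crio setup.
	if listOut, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, listOut)
		return
	}
	fmt.Println("runc list succeeded; pause state can be computed")
}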

TestAddons/parallel/Registry (16.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.292403ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003624037s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003431308s
addons_test.go:392: (dbg) Run:  kubectl --context addons-463201 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-463201 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-463201 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.125638948s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 ip
2025/12/06 10:14:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable registry --alsologtostderr -v=1: exit status 11 (272.333597ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:02.368464  495539 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:02.369267  495539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:02.369284  495539 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:02.369294  495539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:02.369581  495539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:02.369922  495539 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:02.370344  495539 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:02.370367  495539 addons.go:622] checking whether the cluster is paused
	I1206 10:14:02.370516  495539 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:02.370536  495539 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:02.371111  495539 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:02.393063  495539 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:02.393132  495539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:02.412269  495539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:02.517927  495539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:02.518019  495539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:02.550716  495539 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:02.550805  495539 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:02.550835  495539 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:02.550843  495539 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:02.550847  495539 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:02.550851  495539 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:02.550855  495539 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:02.550858  495539 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:02.550861  495539 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:02.550886  495539 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:02.550892  495539 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:02.550896  495539 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:02.550899  495539 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:02.550903  495539 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:02.550906  495539 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:02.550914  495539 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:02.550918  495539 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:02.550923  495539 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:02.550926  495539 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:02.550930  495539 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:02.550935  495539 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:02.550938  495539 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:02.550941  495539 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:02.550944  495539 cri.go:89] found id: ""
	I1206 10:14:02.551009  495539 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:02.566808  495539 out.go:203] 
	W1206 10:14:02.569902  495539 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:02.569932  495539 out.go:285] * 
	W1206 10:14:02.576711  495539 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:02.579772  495539 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.69s)
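
Note that the registry addon itself passed every functional check here (both pods healthy, in-cluster probe OK); only the trailing disable step hit the same runc error described under the Volcano test. The in-cluster probe from addons_test.go:397 boils down to a one-shot busybox pod checking the registry's cluster DNS name with wget --spider. A minimal Go wrapper around that exact kubectl invocation (context name taken from this run; -it is replaced by -i since there is no TTY when driven from a program) would look like:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the log's kubectl run: a throwaway busybox pod that
	// checks reachability of the registry service via cluster DNS
	// without downloading anything (wget --spider).
	cmd := exec.Command("kubectl", "--context", "addons-463201",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("registry probe failed:", err)
	}
}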

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.041619ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-463201
addons_test.go:332: (dbg) Run:  kubectl --context addons-463201 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (260.162887ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:37.759427  497400 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:37.760146  497400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:37.760159  497400 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:37.760164  497400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:37.760449  497400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:37.760747  497400 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:37.761217  497400 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:37.761241  497400 addons.go:622] checking whether the cluster is paused
	I1206 10:14:37.761430  497400 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:37.761477  497400 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:37.762268  497400 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:37.784812  497400 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:37.784866  497400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:37.804205  497400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:37.909628  497400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:37.909709  497400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:37.937856  497400 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:37.937879  497400 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:37.937885  497400 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:37.937888  497400 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:37.937892  497400 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:37.937895  497400 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:37.937898  497400 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:37.937902  497400 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:37.937905  497400 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:37.937912  497400 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:37.937915  497400 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:37.937919  497400 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:37.937922  497400 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:37.937925  497400 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:37.937928  497400 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:37.937939  497400 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:37.937943  497400 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:37.937950  497400 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:37.937953  497400 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:37.937956  497400 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:37.937960  497400 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:37.937964  497400 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:37.937972  497400 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:37.937975  497400 cri.go:89] found id: ""
	I1206 10:14:37.938026  497400 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:37.954061  497400 out.go:203] 
	W1206 10:14:37.956995  497400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:37.957035  497400 out.go:285] * 
	W1206 10:14:37.963673  497400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:37.966638  497400 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (142.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-463201 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-463201 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-463201 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6e73ef75-c0a7-4902-b414-0cfdff4206ef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6e73ef75-c0a7-4902-b414-0cfdff4206ef] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003265917s
I1206 10:14:32.395735  488068 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.724491973s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-463201 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
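
The Ingress failure mode is different: the controller and the nginx test pod both came up, but curling through the node (addons_test.go:264) hung until ssh propagated exit status 28, which is curl's operation-timed-out code. For a host-side check of the same route with an explicit deadline instead of a multi-minute hang, a plain Go request would do; this sketch assumes the node IP reported above (192.168.49.2) and that the controller's port 80 is reachable from the host.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// The testdata ingress routes on this name, so override the Host
	// header, mirroring curl's -H 'Host: nginx.example.com'.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here is the same symptom the test saw (curl exit 28).
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
	fmt.Println("status:", resp.Status)
}
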
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-463201
helpers_test.go:243: (dbg) docker inspect addons-463201:

-- stdout --
	[
	    {
	        "Id": "c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd",
	        "Created": "2025-12-06T10:11:08.484782916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:11:08.55279692Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/hosts",
	        "LogPath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd-json.log",
	        "Name": "/addons-463201",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-463201:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-463201",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd",
	                "LowerDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-463201",
	                "Source": "/var/lib/docker/volumes/addons-463201/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-463201",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-463201",
	                "name.minikube.sigs.k8s.io": "addons-463201",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea12389f1ee0eca5fa8bb4ed39747af46d36780140f393641f05623c4d3f35c",
	            "SandboxKey": "/var/run/docker/netns/cea12389f1ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-463201": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:8f:1c:2e:15:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5ab4f6c223ab761207584f4f06b2cac589555936c8e4d745cd40ab7028f06221",
	                    "EndpointID": "5aa636b3ff4619d7ae6aadb26df1a1f81577ec1cfb6e189cf9b0d10f13e9977e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-463201",
	                        "c07cf5a07d38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
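
The NetworkSettings.Ports block in the inspect output above is the same data every minikube command in this report reads before opening its SSH session (the cli_runner.go "docker container inspect -f ..." lines in each stderr). A small Go sketch that resolves a published host port the same way, reusing the exact template string from those log lines (container name and port are this run's values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port docker published for the given container
// port, using the same inspect template as the cli_runner.go log lines.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("addons-463201", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + p) // 33168 in this run
}
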
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-463201 -n addons-463201
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-463201 logs -n 25: (1.496066271s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-001536                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-001536 │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ start   │ --download-only -p binary-mirror-243935 --alsologtostderr --binary-mirror http://127.0.0.1:42811 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-243935   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ delete  │ -p binary-mirror-243935                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-243935   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ addons  │ disable dashboard -p addons-463201                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ addons  │ enable dashboard -p addons-463201                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ start   │ -p addons-463201 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:13 UTC │
	│ addons  │ addons-463201 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-463201 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-463201 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-463201 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ ip      │ addons-463201 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │ 06 Dec 25 10:14 UTC │
	│ addons  │ addons-463201 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ ssh     │ addons-463201 ssh cat /opt/local-path-provisioner/pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │ 06 Dec 25 10:14 UTC │
	│ addons  │ addons-463201 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-463201 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ ssh     │ addons-463201 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-463201                                                                                                                                                                                                                                                                                                                                                                                           │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │ 06 Dec 25 10:14 UTC │
	│ addons  │ addons-463201 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ ip      │ addons-463201 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:11:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
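
The [IWEF] prefix described above is the standard klog header: severity (Info/Warning/Error/Fatal), month+day, wall-clock time, thread id, and source file:line. A minimal shell sketch for pulling only warnings and errors out of a saved copy of this log (the file name minikube.log is an assumption):

  # keep only W/E/F severity lines from a klog-format log file
  grep -E '^[[:space:]]*[WEF][0-9]{4} ' minikube.log
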
	I1206 10:11:02.269754  489065 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:11:02.269946  489065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:11:02.269988  489065 out.go:374] Setting ErrFile to fd 2...
	I1206 10:11:02.270011  489065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:11:02.270374  489065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:11:02.270962  489065 out.go:368] Setting JSON to false
	I1206 10:11:02.272348  489065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10414,"bootTime":1765005449,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:11:02.272455  489065 start.go:143] virtualization:  
	I1206 10:11:02.275746  489065 out.go:179] * [addons-463201] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:11:02.279380  489065 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:11:02.279506  489065 notify.go:221] Checking for updates...
	I1206 10:11:02.285080  489065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:11:02.288029  489065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:11:02.290818  489065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:11:02.293719  489065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:11:02.296699  489065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:11:02.299709  489065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:11:02.327195  489065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:11:02.327321  489065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:11:02.386522  489065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:11:02.377345521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:11:02.386626  489065 docker.go:319] overlay module found
	I1206 10:11:02.389850  489065 out.go:179] * Using the docker driver based on user configuration
	I1206 10:11:02.392781  489065 start.go:309] selected driver: docker
	I1206 10:11:02.392824  489065 start.go:927] validating driver "docker" against <nil>
	I1206 10:11:02.392839  489065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:11:02.393691  489065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:11:02.453306  489065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:11:02.444232934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:11:02.453465  489065 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:11:02.453700  489065 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:11:02.456659  489065 out.go:179] * Using Docker driver with root privileges
	I1206 10:11:02.459518  489065 cni.go:84] Creating CNI manager for ""
	I1206 10:11:02.459589  489065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:11:02.459603  489065 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:11:02.459701  489065 start.go:353] cluster config:
	{Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
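
The cluster config printed above is also persisted as JSON at the profile path logged just below. A minimal sketch for inspecting it on the CI host, assuming jq is installed:

  # pretty-print the saved profile config
  jq . /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json
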
	I1206 10:11:02.462832  489065 out.go:179] * Starting "addons-463201" primary control-plane node in "addons-463201" cluster
	I1206 10:11:02.465781  489065 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:11:02.468908  489065 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:11:02.471818  489065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:11:02.471876  489065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 10:11:02.471889  489065 cache.go:65] Caching tarball of preloaded images
	I1206 10:11:02.471893  489065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:11:02.471971  489065 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:11:02.471982  489065 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 10:11:02.472365  489065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json ...
	I1206 10:11:02.472384  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json: {Name:mk4121c83f831a50388a5d275aaf3116a37fda3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:02.491579  489065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:11:02.491604  489065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
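
Because the kicbase image is pinned by sha256 digest, the cache check only has to confirm that digest exists in the local daemon. The same check by hand:

  # list local kicbase images together with their digests
  docker images --digests gcr.io/k8s-minikube/kicbase-builds
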
	I1206 10:11:02.491624  489065 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:11:02.491657  489065 start.go:360] acquireMachinesLock for addons-463201: {Name:mke5e16993baf13ed5da7fa3be575b8b2edba38c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:11:02.491780  489065 start.go:364] duration metric: took 102.144µs to acquireMachinesLock for "addons-463201"
	I1206 10:11:02.491811  489065 start.go:93] Provisioning new machine with config: &{Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:11:02.491879  489065 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:11:02.495228  489065 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 10:11:02.495462  489065 start.go:159] libmachine.API.Create for "addons-463201" (driver="docker")
	I1206 10:11:02.495502  489065 client.go:173] LocalClient.Create starting
	I1206 10:11:02.495638  489065 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem
	I1206 10:11:02.793783  489065 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem
	I1206 10:11:03.144660  489065 cli_runner.go:164] Run: docker network inspect addons-463201 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:11:03.162973  489065 cli_runner.go:211] docker network inspect addons-463201 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:11:03.163056  489065 network_create.go:284] running [docker network inspect addons-463201] to gather additional debugging logs...
	I1206 10:11:03.163079  489065 cli_runner.go:164] Run: docker network inspect addons-463201
	W1206 10:11:03.180429  489065 cli_runner.go:211] docker network inspect addons-463201 returned with exit code 1
	I1206 10:11:03.180461  489065 network_create.go:287] error running [docker network inspect addons-463201]: docker network inspect addons-463201: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-463201 not found
	I1206 10:11:03.180489  489065 network_create.go:289] output of [docker network inspect addons-463201]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-463201 not found
	
	** /stderr **
	I1206 10:11:03.180584  489065 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:11:03.197239  489065 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c7930}
	I1206 10:11:03.197279  489065 network_create.go:124] attempt to create docker network addons-463201 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:11:03.197344  489065 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-463201 addons-463201
	I1206 10:11:03.260960  489065 network_create.go:108] docker network addons-463201 192.168.49.0/24 created
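
The node network is an ordinary Docker bridge with a fixed subnet, gateway and MTU. To confirm the values chosen above while the network exists:

  # print the subnet and gateway minikube assigned
  docker network inspect addons-463201 \
    --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
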
	I1206 10:11:03.260993  489065 kic.go:121] calculated static IP "192.168.49.2" for the "addons-463201" container
	I1206 10:11:03.261066  489065 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:11:03.277897  489065 cli_runner.go:164] Run: docker volume create addons-463201 --label name.minikube.sigs.k8s.io=addons-463201 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:11:03.296953  489065 oci.go:103] Successfully created a docker volume addons-463201
	I1206 10:11:03.297040  489065 cli_runner.go:164] Run: docker run --rm --name addons-463201-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463201 --entrypoint /usr/bin/test -v addons-463201:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:11:04.434538  489065 cli_runner.go:217] Completed: docker run --rm --name addons-463201-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463201 --entrypoint /usr/bin/test -v addons-463201:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.137443232s)
	I1206 10:11:04.434568  489065 oci.go:107] Successfully prepared a docker volume addons-463201
	I1206 10:11:04.434611  489065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:11:04.434622  489065 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:11:04.434689  489065 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-463201:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:11:08.405952  489065 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-463201:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.971222465s)
	I1206 10:11:08.405986  489065 kic.go:203] duration metric: took 3.971359875s to extract preloaded images to volume ...
	W1206 10:11:08.406134  489065 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:11:08.406247  489065 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:11:08.461925  489065 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-463201 --name addons-463201 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463201 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-463201 --network addons-463201 --ip 192.168.49.2 --volume addons-463201:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
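
Every node port in the run command above (SSH, API server 8443, Docker 2376, registry 5000) is published to an ephemeral port on 127.0.0.1. To see which host port a given container port landed on:

  # show the host binding for the node's SSH port
  docker port addons-463201 22
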
	I1206 10:11:08.781254  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Running}}
	I1206 10:11:08.810212  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:08.836004  489065 cli_runner.go:164] Run: docker exec addons-463201 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:11:08.906686  489065 oci.go:144] the created container "addons-463201" has a running status.
	I1206 10:11:08.906712  489065 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa...
	I1206 10:11:09.196957  489065 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:11:09.224761  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:09.250676  489065 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:11:09.250701  489065 kic_runner.go:114] Args: [docker exec --privileged addons-463201 chown docker:docker /home/docker/.ssh/authorized_keys]
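
With the public key installed, the node is reachable over plain SSH. A sketch using this run's key path and host port (33168 here; the port is assigned per run):

  ssh -o StrictHostKeyChecking=no \
    -i /home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa \
    -p 33168 docker@127.0.0.1
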
	I1206 10:11:09.307223  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:09.325186  489065 machine.go:94] provisionDockerMachine start ...
	I1206 10:11:09.325292  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:09.343250  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:09.343578  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:09.343593  489065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:11:09.344279  489065 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:11:12.498909  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-463201
	
	I1206 10:11:12.498933  489065 ubuntu.go:182] provisioning hostname "addons-463201"
	I1206 10:11:12.499002  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:12.517843  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:12.518219  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:12.518238  489065 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-463201 && echo "addons-463201" | sudo tee /etc/hostname
	I1206 10:11:12.681238  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-463201
	
	I1206 10:11:12.681373  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:12.700299  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:12.700611  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:12.700634  489065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-463201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-463201/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-463201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:11:12.855593  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
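
The script above rewrites the existing 127.0.1.1 entry rather than appending a new one, so repeated provisioning stays idempotent. One way to verify the result, using the same CLI convention as the command table above:

  out/minikube-linux-arm64 -p addons-463201 ssh grep 127.0.1.1 /etc/hosts
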
	I1206 10:11:12.855618  489065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:11:12.855641  489065 ubuntu.go:190] setting up certificates
	I1206 10:11:12.855651  489065 provision.go:84] configureAuth start
	I1206 10:11:12.855743  489065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463201
	I1206 10:11:12.872432  489065 provision.go:143] copyHostCerts
	I1206 10:11:12.872523  489065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:11:12.872651  489065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:11:12.872704  489065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:11:12.872750  489065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.addons-463201 san=[127.0.0.1 192.168.49.2 addons-463201 localhost minikube]
	I1206 10:11:13.054543  489065 provision.go:177] copyRemoteCerts
	I1206 10:11:13.054614  489065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:11:13.054655  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.072005  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:13.179085  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:11:13.197075  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:11:13.214820  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 10:11:13.233154  489065 provision.go:87] duration metric: took 377.478573ms to configureAuth
	I1206 10:11:13.233181  489065 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:11:13.233371  489065 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:11:13.233486  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.249904  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:13.250233  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:13.250253  489065 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:11:13.728181  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:11:13.728206  489065 machine.go:97] duration metric: took 4.402996843s to provisionDockerMachine
	I1206 10:11:13.728216  489065 client.go:176] duration metric: took 11.232703569s to LocalClient.Create
	I1206 10:11:13.728229  489065 start.go:167] duration metric: took 11.232768486s to libmachine.API.Create "addons-463201"
	I1206 10:11:13.728237  489065 start.go:293] postStartSetup for "addons-463201" (driver="docker")
	I1206 10:11:13.728253  489065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:11:13.728324  489065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:11:13.728386  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.750415  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:13.855568  489065 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:11:13.858958  489065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:11:13.858986  489065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:11:13.859000  489065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:11:13.859072  489065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:11:13.859094  489065 start.go:296] duration metric: took 130.845682ms for postStartSetup
	I1206 10:11:13.859450  489065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463201
	I1206 10:11:13.876285  489065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json ...
	I1206 10:11:13.876577  489065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:11:13.876642  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.893434  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:13.996286  489065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:11:14.002858  489065 start.go:128] duration metric: took 11.510963595s to createHost
	I1206 10:11:14.002948  489065 start.go:83] releasing machines lock for "addons-463201", held for 11.511154787s
	I1206 10:11:14.003078  489065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463201
	I1206 10:11:14.027196  489065 ssh_runner.go:195] Run: cat /version.json
	I1206 10:11:14.027248  489065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:11:14.027319  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:14.027253  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:14.045457  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:14.045893  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:14.150949  489065 ssh_runner.go:195] Run: systemctl --version
	I1206 10:11:14.242685  489065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:11:14.286319  489065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:11:14.290743  489065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:11:14.290841  489065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:11:14.318911  489065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:11:14.318948  489065 start.go:496] detecting cgroup driver to use...
	I1206 10:11:14.318981  489065 detect.go:187] detected "cgroupfs" cgroup driver on host os
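
minikube aligns the container runtime's cgroup driver with whatever the host is using. The same detection can be reproduced by hand:

  # what the host Docker daemon reports
  docker info --format '{{.CgroupDriver}}'
  # cgroup v2 hosts print cgroup2fs here; cgroup v1 hosts print tmpfs
  stat -fc %T /sys/fs/cgroup
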
	I1206 10:11:14.319052  489065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:11:14.336536  489065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:11:14.349423  489065 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:11:14.349519  489065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:11:14.366957  489065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:11:14.385438  489065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:11:14.504429  489065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:11:14.658975  489065 docker.go:234] disabling docker service ...
	I1206 10:11:14.659090  489065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:11:14.686749  489065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:11:14.700932  489065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:11:14.831039  489065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:11:14.943693  489065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:11:14.957826  489065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:11:14.972477  489065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:11:14.972545  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:14.981569  489065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:11:14.981662  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:14.990631  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.006982  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.027263  489065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:11:15.037567  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.047895  489065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.063326  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.073367  489065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:11:15.081528  489065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:11:15.089554  489065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:11:15.199546  489065 ssh_runner.go:195] Run: sudo systemctl restart crio
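
After the sed edits above and the restart, the effective drop-in can be checked inside the node, for example via the profile's ssh subcommand:

  out/minikube-linux-arm64 -p addons-463201 ssh sudo grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
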
	I1206 10:11:15.371267  489065 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:11:15.371394  489065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:11:15.375264  489065 start.go:564] Will wait 60s for crictl version
	I1206 10:11:15.375342  489065 ssh_runner.go:195] Run: which crictl
	I1206 10:11:15.378925  489065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:11:15.410728  489065 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:11:15.410859  489065 ssh_runner.go:195] Run: crio --version
	I1206 10:11:15.442718  489065 ssh_runner.go:195] Run: crio --version
	I1206 10:11:15.476347  489065 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 10:11:15.479252  489065 cli_runner.go:164] Run: docker network inspect addons-463201 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:11:15.495453  489065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:11:15.499480  489065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:11:15.509154  489065 kubeadm.go:884] updating cluster {Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:11:15.509282  489065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:11:15.509339  489065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:11:15.549759  489065 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:11:15.549785  489065 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:11:15.549841  489065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:11:15.575079  489065 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:11:15.575103  489065 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:11:15.575111  489065 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 10:11:15.575257  489065 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-463201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
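
This override is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); the merged unit can be viewed on the node with:

  out/minikube-linux-arm64 -p addons-463201 ssh sudo systemctl cat kubelet
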
	I1206 10:11:15.575364  489065 ssh_runner.go:195] Run: crio config
	I1206 10:11:15.646595  489065 cni.go:84] Creating CNI manager for ""
	I1206 10:11:15.646618  489065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:11:15.646634  489065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:11:15.646658  489065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-463201 NodeName:addons-463201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:11:15.646792  489065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-463201"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
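
The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. As a sketch, recent kubeadm releases can sanity-check such a file; this assumes the kubeadm binary sits in the versioned directory listed just below:

  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
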
	
	I1206 10:11:15.646869  489065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:11:15.654693  489065 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:11:15.654813  489065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:11:15.662402  489065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 10:11:15.674984  489065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:11:15.688520  489065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1206 10:11:15.701891  489065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:11:15.706749  489065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:11:15.717000  489065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:11:15.858291  489065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:11:15.876427  489065 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201 for IP: 192.168.49.2
	I1206 10:11:15.876450  489065 certs.go:195] generating shared ca certs ...
	I1206 10:11:15.876466  489065 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:15.876680  489065 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:11:16.045942  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt ...
	I1206 10:11:16.045975  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt: {Name:mk3118e07c47df7b147fdee8b9a1528f37e11089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.046218  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key ...
	I1206 10:11:16.046234  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key: {Name:mk8407584158b4a98229fa6be2ab9e28cf251cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.046330  489065 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:11:16.267154  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt ...
	I1206 10:11:16.267185  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt: {Name:mk8bfec8e2ea314a06020d5dc8d08c9364ec5f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.267362  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key ...
	I1206 10:11:16.267372  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key: {Name:mkd100ae304037f849fef9c412ec498fd7af0314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.267459  489065 certs.go:257] generating profile certs ...
	I1206 10:11:16.267521  489065 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.key
	I1206 10:11:16.267539  489065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt with IP's: []
	I1206 10:11:16.379520  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt ...
	I1206 10:11:16.379589  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: {Name:mk27499a800b2fbf1affd96cacf4ca3c735b011c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.379771  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.key ...
	I1206 10:11:16.379786  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.key: {Name:mk58fadde495cc32056c31562edb06cc7ec3af9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.379889  489065 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c
	I1206 10:11:16.379908  489065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:11:16.436080  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c ...
	I1206 10:11:16.436108  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c: {Name:mk885a56a7d3b4747cb1c2be691a25e351cd2427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.436296  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c ...
	I1206 10:11:16.436313  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c: {Name:mk0b4adc52d4ea35e8b10c39e6a7fe76de1140ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.436405  489065 certs.go:382] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt
	I1206 10:11:16.436489  489065 certs.go:386] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key
	I1206 10:11:16.436547  489065 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key
	I1206 10:11:16.436566  489065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt with IP's: []
	I1206 10:11:17.292151  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt ...
	I1206 10:11:17.292185  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt: {Name:mk5c3cdb3197d7b3590a183324c49c9c6943febe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:17.292382  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key ...
	I1206 10:11:17.292396  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key: {Name:mka2e3533bb772896abff489310977c7be04e583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:17.292591  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:11:17.292636  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:11:17.292669  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:11:17.292705  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
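
certs.go first materializes the shared minikubeCA and proxyClientCA key pairs, then derives the per-profile client, apiserver, and aggregator certs from them. A minimal sketch of the self-signed CA step using Go's crypto/x509; the 2048-bit RSA key size and ten-year validity here are assumptions for illustration, not values read from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate a key pair and a self-signed CA certificate, then write
	// ca.key / ca.crt as PEM, roughly what certs.go does for "minikubeCA".
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	crt, _ := os.Create("ca.crt")
	pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	crt.Close()
	k, _ := os.Create("ca.key")
	pem.Encode(k, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	k.Close()
}
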
	I1206 10:11:17.293284  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:11:17.311761  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:11:17.331058  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:11:17.349590  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:11:17.368475  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:11:17.385758  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:11:17.402970  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:11:17.421060  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:11:17.441923  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:11:17.461692  489065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:11:17.476056  489065 ssh_runner.go:195] Run: openssl version
	I1206 10:11:17.485202  489065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.492424  489065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:11:17.500103  489065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.504428  489065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.504552  489065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.545792  489065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:11:17.553772  489065 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
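
The `openssl x509 -hash` step computes the subject-hash under which OpenSSL looks up CAs in /etc/ssl/certs, which is where the b5213941.0 symlink above comes from. A sketch of the same two steps, assuming the openssl binary is on PATH and sudo is available for the link; linkCACert is an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert asks openssl for the subject-hash of the CA PEM, then
// symlinks /etc/ssl/certs/<hash>.0 to it so OpenSSL's hashed-directory
// lookup can find the certificate.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
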
	I1206 10:11:17.561273  489065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:11:17.564781  489065 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:11:17.564830  489065 kubeadm.go:401] StartCluster: {Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:11:17.564903  489065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:11:17.564962  489065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:11:17.592859  489065 cri.go:89] found id: ""
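
The empty crictl result confirms a fresh node: no kube-system containers exist yet, so the stale-config cleanup that follows finds nothing to remove. A sketch of that listing call; listKubeSystemContainers is an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers asks crictl for the IDs of all containers
// (running or not) whose pod namespace label is kube-system. An empty
// slice, as in the log, means no control-plane containers exist yet.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(ids, err)
}
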
	I1206 10:11:17.592939  489065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:11:17.601161  489065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:11:17.608969  489065 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:11:17.609052  489065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:11:17.616875  489065 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:11:17.616906  489065 kubeadm.go:158] found existing configuration files:
	
	I1206 10:11:17.616968  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:11:17.624853  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:11:17.624921  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:11:17.632342  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:11:17.640197  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:11:17.640276  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:11:17.648009  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:11:17.655795  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:11:17.655906  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:11:17.663588  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:11:17.671327  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:11:17.671413  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:11:17.679013  489065 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
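
The kubeadm binary is version-pinned under /var/lib/minikube/binaries and run with a long --ignore-preflight-errors list, since a Docker-container "node" cannot pass host-level checks such as SystemVerification, Swap, or port availability. A sketch of building that invocation, showing only a subset of the ignore list from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prefix PATH with the versioned binaries dir so the pinned kubeadm
	// is found first, then skip the preflight checks a containerized
	// node cannot satisfy.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	script := fmt.Sprintf(
		`env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignored, ","))
	out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput()
	fmt.Println(string(out), err)
}
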
	I1206 10:11:17.723926  489065 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 10:11:17.724247  489065 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:11:17.747604  489065 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:11:17.747757  489065 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:11:17.747837  489065 kubeadm.go:319] OS: Linux
	I1206 10:11:17.747931  489065 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:11:17.748026  489065 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:11:17.748133  489065 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:11:17.748228  489065 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:11:17.748324  489065 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:11:17.748414  489065 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:11:17.748494  489065 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:11:17.748575  489065 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:11:17.748657  489065 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:11:17.812048  489065 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:11:17.812163  489065 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:11:17.812259  489065 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:11:17.823524  489065 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:11:17.829584  489065 out.go:252]   - Generating certificates and keys ...
	I1206 10:11:17.829681  489065 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:11:17.829756  489065 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:11:18.449376  489065 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:11:19.180286  489065 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:11:19.859593  489065 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:11:20.215542  489065 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:11:21.447289  489065 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:11:21.447680  489065 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-463201 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:11:21.898827  489065 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:11:21.898965  489065 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-463201 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:11:22.064274  489065 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:11:22.401308  489065 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:11:23.085929  489065 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:11:23.086342  489065 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:11:23.241877  489065 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:11:23.442731  489065 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:11:24.923661  489065 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:11:25.866232  489065 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:11:26.487963  489065 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:11:26.489074  489065 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:11:26.492103  489065 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:11:26.495635  489065 out.go:252]   - Booting up control plane ...
	I1206 10:11:26.495745  489065 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:11:26.495824  489065 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:11:26.497061  489065 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:11:26.518357  489065 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:11:26.518472  489065 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:11:26.526760  489065 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:11:26.526863  489065 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:11:26.526903  489065 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:11:26.656781  489065 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:11:26.656902  489065 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:11:27.657406  489065 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000907458s
	I1206 10:11:27.663522  489065 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 10:11:27.663631  489065 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 10:11:27.663721  489065 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 10:11:27.663805  489065 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 10:11:30.783893  489065 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.120820254s
	I1206 10:11:32.539254  489065 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.876665656s
	I1206 10:11:34.164406  489065 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501524263s
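
The three [control-plane-check] probes poll each component's local health endpoint until it returns 200 (apiserver /livez on 8443, controller-manager /healthz on 10257, scheduler /livez on 10259). A sketch of such a poll; TLS verification is skipped because these are local, self-signed endpoints probed from the node itself, and waitHealthy is an illustrative helper:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline
// passes, roughly what the [control-plane-check] lines are doing.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute))
}
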
	I1206 10:11:34.198095  489065 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 10:11:34.214027  489065 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 10:11:34.232206  489065 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 10:11:34.232412  489065 kubeadm.go:319] [mark-control-plane] Marking the node addons-463201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 10:11:34.247769  489065 kubeadm.go:319] [bootstrap-token] Using token: 1a4eby.57hnzzmqzzg8bz87
	I1206 10:11:34.250765  489065 out.go:252]   - Configuring RBAC rules ...
	I1206 10:11:34.250903  489065 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 10:11:34.261120  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 10:11:34.273801  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 10:11:34.278959  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 10:11:34.286250  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 10:11:34.294158  489065 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 10:11:34.571859  489065 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 10:11:35.016721  489065 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 10:11:35.571386  489065 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 10:11:35.572495  489065 kubeadm.go:319] 
	I1206 10:11:35.572593  489065 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 10:11:35.572607  489065 kubeadm.go:319] 
	I1206 10:11:35.572709  489065 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 10:11:35.572715  489065 kubeadm.go:319] 
	I1206 10:11:35.572765  489065 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 10:11:35.572849  489065 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 10:11:35.572906  489065 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 10:11:35.572916  489065 kubeadm.go:319] 
	I1206 10:11:35.572985  489065 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 10:11:35.572995  489065 kubeadm.go:319] 
	I1206 10:11:35.573054  489065 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 10:11:35.573065  489065 kubeadm.go:319] 
	I1206 10:11:35.573135  489065 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 10:11:35.573223  489065 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 10:11:35.573297  489065 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 10:11:35.573309  489065 kubeadm.go:319] 
	I1206 10:11:35.573398  489065 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 10:11:35.573477  489065 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 10:11:35.573485  489065 kubeadm.go:319] 
	I1206 10:11:35.573569  489065 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1a4eby.57hnzzmqzzg8bz87 \
	I1206 10:11:35.573676  489065 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:17404bed1e42f06c637e5edf0fb99872362c7da5e3019dba692c7ce2802c61f1 \
	I1206 10:11:35.573701  489065 kubeadm.go:319] 	--control-plane 
	I1206 10:11:35.573709  489065 kubeadm.go:319] 
	I1206 10:11:35.573794  489065 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 10:11:35.573801  489065 kubeadm.go:319] 
	I1206 10:11:35.573884  489065 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1a4eby.57hnzzmqzzg8bz87 \
	I1206 10:11:35.573990  489065 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:17404bed1e42f06c637e5edf0fb99872362c7da5e3019dba692c7ce2802c61f1 
	I1206 10:11:35.577452  489065 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 10:11:35.577682  489065 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:11:35.577792  489065 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
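
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's SubjectPublicKeyInfo, letting a joining node pin the CA without carrying the full certificate. It can be recomputed from ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's discovery-token-ca-cert-hash: SHA-256 over the
// CA cert's raw SubjectPublicKeyInfo, printed in sha256:<hex> form.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
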
	I1206 10:11:35.577815  489065 cni.go:84] Creating CNI manager for ""
	I1206 10:11:35.577823  489065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:11:35.581101  489065 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 10:11:35.583985  489065 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 10:11:35.588065  489065 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 10:11:35.588085  489065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 10:11:35.602024  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 10:11:35.899396  489065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:11:35.899543  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:35.899639  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-463201 minikube.k8s.io/updated_at=2025_12_06T10_11_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=addons-463201 minikube.k8s.io/primary=true
	I1206 10:11:36.017699  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:36.078889  489065 ops.go:34] apiserver oom_adj: -16
	I1206 10:11:36.518482  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:37.018675  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:37.518115  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:38.018200  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:38.518519  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:39.018468  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:39.518350  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:40.021979  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:40.518661  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:40.616346  489065 kubeadm.go:1114] duration metric: took 4.716863675s to wait for elevateKubeSystemPrivileges
	I1206 10:11:40.616380  489065 kubeadm.go:403] duration metric: took 23.051554111s to StartCluster
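
The 4.7s elevateKubeSystemPrivileges wait is the loop above: `kubectl get sa default` is retried roughly every 500ms, apparently until the controller-manager has created the default ServiceAccount that the minikube-rbac ClusterRoleBinding refers to. A sketch of that wait; waitForDefaultSA is an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds,
// i.e. until the token controller has populated the ServiceAccount.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute))
}
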
	I1206 10:11:40.616399  489065 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:40.616525  489065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:11:40.616971  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:40.617170  489065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:11:40.617315  489065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 10:11:40.617574  489065 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:11:40.617617  489065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
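
The toEnable map drives the addon machinery, and the interleaved "Setting addon" lines and out-of-order timestamps that follow come from addons being set up concurrently. A simplified model of that fan-out (the real code also scps each manifest and kubectl-applies it):

package main

import (
	"fmt"
	"sync"
)

// Walk the toEnable map and start one worker per enabled addon,
// mirroring the concurrent setup visible in the log below.
func main() {
	toEnable := map[string]bool{
		"yakd": true, "metrics-server": true, "registry": true,
		"dashboard": false,
	}
	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			fmt.Println("enabling addon:", name)
		}(name)
	}
	wg.Wait()
}
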
	I1206 10:11:40.617703  489065 addons.go:70] Setting yakd=true in profile "addons-463201"
	I1206 10:11:40.617721  489065 addons.go:239] Setting addon yakd=true in "addons-463201"
	I1206 10:11:40.617743  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.618243  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.618767  489065 addons.go:70] Setting inspektor-gadget=true in profile "addons-463201"
	I1206 10:11:40.618790  489065 addons.go:239] Setting addon inspektor-gadget=true in "addons-463201"
	I1206 10:11:40.618815  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.618938  489065 addons.go:70] Setting metrics-server=true in profile "addons-463201"
	I1206 10:11:40.618954  489065 addons.go:239] Setting addon metrics-server=true in "addons-463201"
	I1206 10:11:40.618974  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.619300  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.619404  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.619826  489065 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-463201"
	I1206 10:11:40.619849  489065 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-463201"
	I1206 10:11:40.619871  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.620269  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.621651  489065 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-463201"
	I1206 10:11:40.621685  489065 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-463201"
	I1206 10:11:40.621721  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.622193  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.627576  489065 addons.go:70] Setting cloud-spanner=true in profile "addons-463201"
	I1206 10:11:40.627610  489065 addons.go:239] Setting addon cloud-spanner=true in "addons-463201"
	I1206 10:11:40.627652  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.628108  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.633292  489065 addons.go:70] Setting registry=true in profile "addons-463201"
	I1206 10:11:40.633375  489065 addons.go:239] Setting addon registry=true in "addons-463201"
	I1206 10:11:40.633427  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.633931  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.637937  489065 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-463201"
	I1206 10:11:40.638003  489065 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-463201"
	I1206 10:11:40.638035  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.638509  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.662888  489065 addons.go:70] Setting default-storageclass=true in profile "addons-463201"
	I1206 10:11:40.662918  489065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-463201"
	I1206 10:11:40.663273  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.666072  489065 addons.go:70] Setting registry-creds=true in profile "addons-463201"
	I1206 10:11:40.666151  489065 addons.go:239] Setting addon registry-creds=true in "addons-463201"
	I1206 10:11:40.666216  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.666795  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.673569  489065 addons.go:70] Setting gcp-auth=true in profile "addons-463201"
	I1206 10:11:40.673602  489065 mustload.go:66] Loading cluster: addons-463201
	I1206 10:11:40.673821  489065 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:11:40.674083  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.677227  489065 addons.go:70] Setting ingress=true in profile "addons-463201"
	I1206 10:11:40.677252  489065 addons.go:239] Setting addon ingress=true in "addons-463201"
	I1206 10:11:40.677297  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.677750  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.688560  489065 addons.go:70] Setting storage-provisioner=true in profile "addons-463201"
	I1206 10:11:40.688589  489065 addons.go:239] Setting addon storage-provisioner=true in "addons-463201"
	I1206 10:11:40.688624  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.689100  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.692179  489065 addons.go:70] Setting ingress-dns=true in profile "addons-463201"
	I1206 10:11:40.692313  489065 addons.go:239] Setting addon ingress-dns=true in "addons-463201"
	I1206 10:11:40.692358  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.692822  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.712367  489065 out.go:179] * Verifying Kubernetes components...
	I1206 10:11:40.714619  489065 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-463201"
	I1206 10:11:40.714649  489065 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-463201"
	I1206 10:11:40.714992  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.719109  489065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:11:40.734182  489065 addons.go:70] Setting volcano=true in profile "addons-463201"
	I1206 10:11:40.734223  489065 addons.go:239] Setting addon volcano=true in "addons-463201"
	I1206 10:11:40.734258  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.734727  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.768480  489065 addons.go:70] Setting volumesnapshots=true in profile "addons-463201"
	I1206 10:11:40.768510  489065 addons.go:239] Setting addon volumesnapshots=true in "addons-463201"
	I1206 10:11:40.768554  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.769034  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.799247  489065 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 10:11:40.816872  489065 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1206 10:11:40.827367  489065 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 10:11:40.852397  489065 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 10:11:40.903642  489065 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:11:40.903729  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 10:11:40.903868  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
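
The repeated `docker container inspect -f` calls use a Go template to pull out the host port Docker mapped to the container's 22/tcp; the sshutil lines below show the answer here was 33168. The same lookup as a sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// getSSHPort asks Docker which host port is published for the
// container's 22/tcp, using the same template as the log lines above.
func getSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := getSSHPort("addons-463201")
	fmt.Println(port, err)
}
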
	I1206 10:11:40.893288  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 10:11:40.914544  489065 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 10:11:40.914643  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.914969  489065 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 10:11:40.915093  489065 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 10:11:40.893354  489065 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:11:40.928346  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 10:11:40.928414  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.940153  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.965568  489065 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 10:11:40.902572  489065 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:11:40.966080  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 10:11:40.966167  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.975350  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 10:11:40.976501  489065 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 10:11:40.976616  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.975571  489065 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:11:40.975601  489065 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 10:11:40.976359  489065 addons.go:239] Setting addon default-storageclass=true in "addons-463201"
	W1206 10:11:40.978648  489065 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 10:11:40.979207  489065 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 10:11:40.979221  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 10:11:40.979273  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.983679  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 10:11:40.983776  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 10:11:40.985250  489065 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-463201"
	I1206 10:11:40.985285  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.985708  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:41.004764  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 10:11:41.004787  489065 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 10:11:41.004875  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.019117  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 10:11:41.019219  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.051339  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:41.051807  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:41.059966  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 10:11:41.060187  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 10:11:41.063641  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.064837  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.068633  489065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:11:41.068888  489065 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:11:41.068902  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 10:11:41.068961  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.082420  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 10:11:41.088348  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 10:11:41.094199  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 10:11:41.097173  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 10:11:41.098022  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 10:11:41.100000  489065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:11:41.100030  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:11:41.100110  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.122178  489065 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 10:11:41.144943  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1206 10:11:41.145124  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 10:11:41.152715  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 10:11:41.153066  489065 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:11:41.153098  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 10:11:41.153197  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.159218  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 10:11:41.159302  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 10:11:41.159415  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.173658  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.178455  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.187998  489065 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 10:11:41.192724  489065 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 10:11:41.193183  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.197180  489065 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 10:11:41.197557  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 10:11:41.198041  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.201406  489065 out.go:179]   - Using image docker.io/busybox:stable
	I1206 10:11:41.205483  489065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:11:41.205556  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 10:11:41.205676  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.239201  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.262921  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.264938  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.280840  489065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:11:41.280860  489065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:11:41.280922  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.306188  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.318376  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.319169  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	W1206 10:11:41.324002  489065 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:11:41.324046  489065 retry.go:31] will retry after 144.072424ms: ssh: handshake failed: EOF
	I1206 10:11:41.334766  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.364070  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.364567  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	W1206 10:11:41.366862  489065 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:11:41.366886  489065 retry.go:31] will retry after 248.774351ms: ssh: handshake failed: EOF
	I1206 10:11:41.372972  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
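
The two "ssh: handshake failed: EOF" entries are benign: with a dozen clients dialing the same forwarded port at once, some handshakes get dropped, and retry.go waits a randomized, growing delay before redialing. A sketch of that pattern under assumed backoff constants (the real delays are computed by minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryDial attempts dial, sleeping a jittered, roughly doubling delay
// between failures, like the "will retry after 144ms" lines above.
func retryDial(dial func() error, attempts int) error {
	delay := 100 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := dial(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay/2 + jitter)
		delay *= 2
	}
	return errors.New("ssh: handshake failed after retries")
}

func main() {
	i := 0
	err := retryDial(func() error {
		i++
		if i < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	}, 5)
	fmt.Println(err)
}
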
	I1206 10:11:41.549943  489065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
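
The pipeline above edits the CoreDNS Corefile in transit: sed inserts a hosts{} block resolving host.minikube.internal to the Docker gateway (192.168.49.1) ahead of the forward directive, plus a log directive before errors, then kubectl replace pushes the ConfigMap back. The hosts insertion, reproduced on a Corefile string; addMinikubeHost is an illustrative helper:

package main

import (
	"fmt"
	"strings"
)

// addMinikubeHost inserts a hosts{} block before the forward directive,
// the same transformation the sed expression performs on the ConfigMap.
func addMinikubeHost(corefile, gatewayIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var b strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line + "\n")
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Print(addMinikubeHost(corefile, "192.168.49.1"))
}
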
	I1206 10:11:41.550093  489065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:11:41.787545  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:11:41.831419  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 10:11:41.831448  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 10:11:41.848217  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:11:41.853783  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:11:41.869340  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:11:41.884658  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:11:41.907644  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 10:11:41.907669  489065 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 10:11:41.912284  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:11:41.924561  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 10:11:41.942415  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 10:11:41.942454  489065 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 10:11:41.966600  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:11:41.995664  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:11:41.998191  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 10:11:41.998221  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 10:11:42.053868  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:11:42.072519  489065 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 10:11:42.072561  489065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 10:11:42.076432  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 10:11:42.076464  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 10:11:42.280959  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 10:11:42.280988  489065 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 10:11:42.282181  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 10:11:42.282207  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 10:11:42.304170  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 10:11:42.304209  489065 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 10:11:42.346480  489065 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 10:11:42.346509  489065 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 10:11:42.352888  489065 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 10:11:42.352931  489065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 10:11:42.412861  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 10:11:42.412903  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 10:11:42.489366  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 10:11:42.489398  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 10:11:42.491044  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:11:42.491086  489065 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 10:11:42.499504  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:11:42.499538  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 10:11:42.579531  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:11:42.583773  489065 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:11:42.583795  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 10:11:42.685171  489065 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 10:11:42.685197  489065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 10:11:42.730677  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 10:11:42.730748  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 10:11:42.736431  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:11:42.739554  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:11:42.912637  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 10:11:42.912730  489065 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 10:11:42.948082  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 10:11:42.948143  489065 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 10:11:43.118473  489065 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:11:43.118570  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 10:11:43.130402  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 10:11:43.130467  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 10:11:43.326561  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 10:11:43.326630  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 10:11:43.336584  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
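The block above shows the addon installer's two-step pattern: each manifest is copied from minikube's embedded assets onto the node (the `scp memory --> /etc/kubernetes/addons/...` lines), then related files are applied in one batched `kubectl apply -f ... -f ...`. A minimal sketch of the copy step, assuming an already-dialed golang.org/x/crypto/ssh client; the helper name and destination path are illustrative, not minikube's actual code:

// Sketch: copy an in-memory manifest to the node over SSH, the way the
// "scp memory --> /etc/kubernetes/addons/..." log lines suggest.
// Assumes an already-dialed *ssh.Client; paths are illustrative.
package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func pushManifest(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	sess.Stdin = bytes.NewReader(data)
	// Write via sudo tee so the file lands in a root-owned directory.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}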
	I1206 10:11:43.575107  489065 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.024969498s)
	I1206 10:11:43.575210  489065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.025244325s)
	I1206 10:11:43.575373  489065 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
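The Completed line above shows how that host record lands in CoreDNS: minikube pipes the coredns ConfigMap through sed to splice a `hosts` stanza ahead of the `forward . /etc/resolv.conf` plugin, then feeds it back with `kubectl replace -f -`. An equivalent read-modify-update sketch with client-go, assuming a configured clientset (this mirrors, but is not, minikube's implementation):

// Sketch: inject a host.minikube.internal record into CoreDNS by editing
// its ConfigMap, equivalent to the kubectl|sed pipeline logged above.
package sketch

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const hostsBlock = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
`

func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Insert the hosts plugin just before the forward stanza, as the
	// sed expression in the log does.
	cm.Data["Corefile"] = strings.Replace(
		cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf",
		1,
	)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}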
	I1206 10:11:43.576015  489065 node_ready.go:35] waiting up to 6m0s for node "addons-463201" to be "Ready" ...
	I1206 10:11:43.579602  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:11:43.579664  489065 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 10:11:43.832835  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:11:44.079560  489065 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-463201" context rescaled to 1 replicas
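The rescale logged at kapi.go:214 is a plain Scale-subresource write: fetch the deployment's scale, set replicas to 1, update. A hedged client-go sketch, assuming a configured clientset:

// Sketch: rescale the coredns deployment to one replica via the Scale
// subresource, matching the "rescaled to 1 replicas" log line.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}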
	W1206 10:11:45.605128  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:46.818821  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031236956s)
	I1206 10:11:46.818894  489065 addons.go:495] Verifying addon ingress=true in "addons-463201"
	I1206 10:11:46.819112  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.970868664s)
	I1206 10:11:46.819267  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.965454968s)
	I1206 10:11:46.819305  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.949942615s)
	I1206 10:11:46.819367  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.934679644s)
	I1206 10:11:46.819411  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.907103997s)
	I1206 10:11:46.819430  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.894848141s)
	I1206 10:11:46.819469  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.852847309s)
	I1206 10:11:46.819502  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.823808408s)
	I1206 10:11:46.819516  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.765619438s)
	I1206 10:11:46.819569  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.240013928s)
	I1206 10:11:46.820094  489065 addons.go:495] Verifying addon metrics-server=true in "addons-463201"
	I1206 10:11:46.819591  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.083091223s)
	I1206 10:11:46.820112  489065 addons.go:495] Verifying addon registry=true in "addons-463201"
	I1206 10:11:46.819617  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.079999376s)
	I1206 10:11:46.819704  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.483049521s)
	W1206 10:11:46.821492  489065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 10:11:46.821522  489065 retry.go:31] will retry after 143.470615ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
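This failure is the usual CRD-ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines its kind, and the API server has not finished registering `snapshot.storage.k8s.io/v1` when the CR arrives, hence `no matches for kind ... ensure CRDs are installed first`. The installer recovers by retrying after a short delay (retry.go:31) and, as the 10:11:46.966115 Run line below shows, re-applying with `--force`. A stdlib sketch of that retry shape; the delays and attempt count are illustrative, not minikube's real values:

// Sketch: retry an apply-style operation with backoff, the pattern the
// "will retry after 143.470615ms" line implies.
package sketch

import (
	"fmt"
	"time"
)

func retryApply(apply func() error) error {
	delay := 150 * time.Millisecond
	var err error
	for attempt := 0; attempt < 5; attempt++ {
		if err = apply(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // exponential backoff between attempts
	}
	return fmt.Errorf("apply failed after retries: %w", err)
}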
	I1206 10:11:46.822202  489065 out.go:179] * Verifying ingress addon...
	I1206 10:11:46.824424  489065 out.go:179] * Verifying registry addon...
	I1206 10:11:46.826387  489065 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-463201 service yakd-dashboard -n yakd-dashboard
	
	I1206 10:11:46.827319  489065 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 10:11:46.830339  489065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1206 10:11:46.846963  489065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
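The storageclass warning just above is an optimistic-concurrency conflict: another writer updated the `local-path` StorageClass between this client's read and its write, so the update carried a stale resourceVersion. The standard client-go remedy is retry.RetryOnConflict, which re-reads and re-applies the mutation on each conflict. A sketch, assuming a configured clientset; the annotation key is the real default-class marker, the rest is illustrative:

// Sketch: mark a StorageClass non-default with conflict retries, which
// avoids the "object has been modified" error seen above.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func markNonDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a Conflict error triggers another Get+Update round
	})
}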
	I1206 10:11:46.848829  489065 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:11:46.848854  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:46.849259  489065 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 10:11:46.849277  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:46.966115  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:11:47.192012  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.35911936s)
	I1206 10:11:47.192046  489065 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-463201"
	I1206 10:11:47.195149  489065 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 10:11:47.198420  489065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 10:11:47.206795  489065 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:11:47.206824  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
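The long runs of kapi.go:96 lines that follow are one technique repeated: list pods by label selector on a short interval until one leaves Pending. A hedged client-go sketch of that loop; the interval and deadline are illustrative:

// Sketch: poll pods by label until one is Running, the loop behind the
// repeated "waiting for pod ... current state: Pending" lines.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %q in %q not Running before deadline", selector, ns)
}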
	I1206 10:11:47.333110  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:47.334302  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:47.702527  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:47.831241  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:47.833596  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1206 10:11:48.079179  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:48.202514  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:48.330757  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:48.333275  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:48.592467  489065 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 10:11:48.592555  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:48.609216  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:48.701952  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:48.719837  489065 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 10:11:48.733668  489065 addons.go:239] Setting addon gcp-auth=true in "addons-463201"
	I1206 10:11:48.733721  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:48.734161  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:48.750646  489065 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 10:11:48.750708  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:48.767537  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:48.830374  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:48.832582  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:49.202057  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:49.330994  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:49.333081  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:49.702365  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:49.801938  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.83577556s)
	I1206 10:11:49.802033  489065 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.051360258s)
	I1206 10:11:49.805291  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 10:11:49.808263  489065 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 10:11:49.811042  489065 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 10:11:49.811061  489065 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 10:11:49.824250  489065 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 10:11:49.824318  489065 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 10:11:49.830216  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:49.833748  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:49.840811  489065 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 10:11:49.840831  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 10:11:49.853849  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1206 10:11:50.079601  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:50.202415  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:50.348669  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:50.349338  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:50.377136  489065 addons.go:495] Verifying addon gcp-auth=true in "addons-463201"
	I1206 10:11:50.382357  489065 out.go:179] * Verifying gcp-auth addon...
	I1206 10:11:50.386697  489065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 10:11:50.453002  489065 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 10:11:50.453028  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:50.701918  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:50.831070  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:50.833497  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:50.890658  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:51.202080  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:51.331152  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:51.333033  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:51.389989  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:51.701288  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:51.830475  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:51.833040  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:51.889594  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:52.079753  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:52.202421  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:52.330374  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:52.332731  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:52.390301  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:52.702089  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:52.831069  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:52.832927  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:52.889908  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:53.202168  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:53.331005  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:53.332780  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:53.389721  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:53.702178  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:53.831985  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:53.834749  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:53.890261  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:54.201654  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:54.330695  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:54.332756  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:54.389673  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:54.579806  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:54.701441  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:54.830548  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:54.832866  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:54.890108  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:55.201647  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:55.330688  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:55.333938  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:55.389902  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:55.701957  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:55.830504  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:55.832968  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:55.889848  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:56.202578  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:56.330515  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:56.333039  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:56.390093  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:56.702200  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:56.830308  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:56.832819  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:56.889623  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:57.079448  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:57.201512  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:57.330462  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:57.332802  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:57.390474  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:57.702005  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:57.831239  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:57.833362  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:57.890220  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:58.202385  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:58.331202  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:58.333128  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:58.389805  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:58.702159  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:58.832077  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:58.833555  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:58.890373  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:59.202214  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:59.330306  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:59.334659  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:59.390215  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:59.579558  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:59.701817  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:59.831412  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:59.833465  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:59.890249  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:00.219146  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:00.350869  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:00.361224  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:00.390977  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:00.702063  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:00.831453  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:00.833810  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:00.890996  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:01.201830  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:01.331585  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:01.333759  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:01.390629  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:01.579623  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:01.702645  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:01.831705  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:01.833782  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:01.890107  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:02.201937  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:02.331109  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:02.333584  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:02.390329  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:02.702181  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:02.831092  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:02.833396  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:02.890397  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:03.202038  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:03.331350  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:03.333366  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:03.390236  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:03.702035  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:03.831242  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:03.833283  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:03.889980  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:04.079972  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:04.202583  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:04.330460  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:04.332804  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:04.389567  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:04.702334  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:04.830482  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:04.833221  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:04.890008  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:05.201679  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:05.330994  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:05.333564  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:05.390379  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:05.701662  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:05.830702  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:05.832689  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:05.889647  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:06.202213  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:06.330108  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:06.333726  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:06.390811  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:06.579525  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:06.701770  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:06.830852  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:06.832719  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:06.890402  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:07.202685  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:07.330979  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:07.333372  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:07.390258  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:07.702143  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:07.831269  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:07.833201  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:07.890030  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:08.202394  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:08.332072  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:08.333540  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:08.390631  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:08.701586  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:08.830251  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:08.833869  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:08.889824  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:09.079765  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:09.202251  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:09.334812  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:09.335063  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:09.389988  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:09.701715  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:09.830877  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:09.832935  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:09.890209  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:10.203110  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:10.331771  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:10.333069  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:10.390525  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:10.702257  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:10.830980  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:10.833108  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:10.889825  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:11.202243  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:11.331334  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:11.333308  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:11.390125  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:11.579429  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:11.701641  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:11.830769  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:11.832930  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:11.908458  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:12.201499  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:12.331069  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:12.333206  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:12.389970  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:12.701636  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:12.830897  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:12.832947  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:12.889943  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:13.202185  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:13.331222  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:13.333479  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:13.390189  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:13.702365  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:13.832649  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:13.833925  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:13.890003  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:14.079075  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:14.202861  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:14.331259  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:14.333254  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:14.390280  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:14.701945  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:14.831268  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:14.833616  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:14.890308  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:15.201966  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:15.331642  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:15.334163  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:15.390004  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:15.701820  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:15.831722  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:15.833770  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:15.890493  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:16.202387  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:16.332040  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:16.333097  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:16.390092  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:16.580085  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:16.701211  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:16.830208  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:16.833694  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:16.890416  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:17.202050  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:17.330625  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:17.333157  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:17.389864  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:17.701776  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:17.830990  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:17.834110  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:17.889693  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:18.201261  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:18.330427  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:18.332804  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:18.389774  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:18.701315  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:18.835337  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:18.835592  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:18.890206  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:19.078805  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:19.201707  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:19.330720  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:19.333157  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:19.389920  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:19.701856  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:19.830751  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:19.833009  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:19.889717  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:20.201686  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:20.330681  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:20.333186  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:20.389783  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:20.702164  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:20.832369  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:20.833461  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:20.890882  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:21.079673  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:21.201563  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:21.330513  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:21.332954  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:21.457977  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:21.598913  489065 node_ready.go:49] node "addons-463201" is "Ready"
	I1206 10:12:21.598940  489065 node_ready.go:38] duration metric: took 38.022873011s for node "addons-463201" to be "Ready" ...
	I1206 10:12:21.598954  489065 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:12:21.599016  489065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:21.616887  489065 api_server.go:72] duration metric: took 40.999680017s to wait for apiserver process to appear ...
	I1206 10:12:21.616915  489065 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:12:21.616934  489065 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 10:12:21.670507  489065 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 10:12:21.677007  489065 api_server.go:141] control plane version: v1.34.2
	I1206 10:12:21.677044  489065 api_server.go:131] duration metric: took 60.121433ms to wait for apiserver health ...
	I1206 10:12:21.677054  489065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:12:21.717457  489065 system_pods.go:59] 19 kube-system pods found
	I1206 10:12:21.717500  489065 system_pods.go:61] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:21.717509  489065 system_pods.go:61] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending
	I1206 10:12:21.717517  489065 system_pods.go:61] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending
	I1206 10:12:21.717521  489065 system_pods.go:61] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending
	I1206 10:12:21.717525  489065 system_pods.go:61] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:21.717528  489065 system_pods.go:61] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:21.717532  489065 system_pods.go:61] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:21.717536  489065 system_pods.go:61] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:21.717539  489065 system_pods.go:61] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending
	I1206 10:12:21.717543  489065 system_pods.go:61] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:21.717547  489065 system_pods.go:61] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:21.717554  489065 system_pods.go:61] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:21.717558  489065 system_pods.go:61] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending
	I1206 10:12:21.717562  489065 system_pods.go:61] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending
	I1206 10:12:21.717566  489065 system_pods.go:61] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending
	I1206 10:12:21.717574  489065 system_pods.go:61] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending
	I1206 10:12:21.717578  489065 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending
	I1206 10:12:21.717581  489065 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending
	I1206 10:12:21.717585  489065 system_pods.go:61] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Pending
	I1206 10:12:21.717590  489065 system_pods.go:74] duration metric: took 40.531084ms to wait for pod list to return data ...
	I1206 10:12:21.717603  489065 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:12:21.723056  489065 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:12:21.723081  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:21.793176  489065 default_sa.go:45] found service account: "default"
	I1206 10:12:21.793205  489065 default_sa.go:55] duration metric: took 75.59451ms for default service account to be created ...
	I1206 10:12:21.793216  489065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:12:21.820378  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:21.820417  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:21.820427  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:21.820432  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending
	I1206 10:12:21.820436  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending
	I1206 10:12:21.820442  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:21.820447  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:21.820451  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:21.820455  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:21.820459  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending
	I1206 10:12:21.820469  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:21.820473  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:21.820482  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:21.820486  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending
	I1206 10:12:21.820506  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending
	I1206 10:12:21.820510  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending
	I1206 10:12:21.820513  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending
	I1206 10:12:21.820517  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending
	I1206 10:12:21.820521  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending
	I1206 10:12:21.820527  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Pending
	I1206 10:12:21.820550  489065 retry.go:31] will retry after 261.024902ms: missing components: kube-dns
	I1206 10:12:21.871814  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:21.872147  489065 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:12:21.872165  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:21.904284  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:22.102588  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:22.102631  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:22.102641  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:22.102648  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:22.102653  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending
	I1206 10:12:22.102657  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:22.102663  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:22.102667  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:22.102672  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:22.102678  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:22.102684  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:22.102692  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:22.102698  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:22.102708  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending
	I1206 10:12:22.102715  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:22.102720  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:22.102728  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending
	I1206 10:12:22.102733  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending
	I1206 10:12:22.102740  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending
	I1206 10:12:22.102751  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:12:22.102767  489065 retry.go:31] will retry after 358.310574ms: missing components: kube-dns
	I1206 10:12:22.220233  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:22.331343  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:22.336692  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:22.401371  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:22.471048  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:22.471197  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:22.471230  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:22.471253  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:22.471289  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:12:22.471314  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:22.471335  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:22.471371  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:22.471396  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:22.471417  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:22.471451  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:22.471475  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:22.471495  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:22.471529  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:12:22.471555  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:22.471575  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:22.471611  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:12:22.471638  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.471659  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.471694  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Running
	I1206 10:12:22.471733  489065 retry.go:31] will retry after 423.87765ms: missing components: kube-dns
	I1206 10:12:22.702646  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:22.831478  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:22.834247  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:22.890228  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:22.900914  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:22.900963  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:22.900973  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:22.900981  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:22.900988  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:12:22.900994  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:22.901000  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:22.901005  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:22.901010  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:22.901019  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:22.901024  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:22.901036  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:22.901042  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:22.901051  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:12:22.901065  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:22.901072  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:22.901083  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:12:22.901089  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.901096  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.901102  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Running
	I1206 10:12:22.901118  489065 retry.go:31] will retry after 550.206772ms: missing components: kube-dns
	I1206 10:12:23.203284  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:23.331972  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:23.334173  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:23.421220  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:23.470037  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:23.470067  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Running
	I1206 10:12:23.470077  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:23.470084  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:23.470093  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:12:23.470098  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:23.470102  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:23.470106  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:23.470111  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:23.470117  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:23.470124  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:23.470128  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:23.470137  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:23.470143  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:12:23.470159  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:23.470165  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:23.470177  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:12:23.470184  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:23.470196  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:23.470200  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Running
	I1206 10:12:23.470209  489065 system_pods.go:126] duration metric: took 1.67698694s to wait for k8s-apps to be running ...
	I1206 10:12:23.470219  489065 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:12:23.470274  489065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:12:23.497214  489065 system_svc.go:56] duration metric: took 26.984932ms WaitForService to wait for kubelet
	I1206 10:12:23.497243  489065 kubeadm.go:587] duration metric: took 42.880039886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:12:23.497262  489065 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:12:23.500003  489065 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 10:12:23.500033  489065 node_conditions.go:123] node cpu capacity is 2
	I1206 10:12:23.500047  489065 node_conditions.go:105] duration metric: took 2.77967ms to run NodePressure ...
	I1206 10:12:23.500060  489065 start.go:242] waiting for startup goroutines ...
	I1206 10:12:23.703664  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:23.831412  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:23.834812  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:23.890271  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:24.202077  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:24.332651  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:24.335509  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:24.390697  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:24.702946  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:24.831018  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:24.833854  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:24.889644  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:25.203562  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:25.331624  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:25.334476  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:25.390395  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:25.702839  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:25.834466  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:25.839033  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:25.889969  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:26.203180  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:26.332675  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:26.334000  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:26.389710  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:26.702200  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:26.831582  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:26.834391  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:26.890417  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:27.210095  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:27.333956  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:27.335094  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:27.389867  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:27.702365  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:27.830569  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:27.833019  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:27.889735  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:28.202004  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:28.331523  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:28.333506  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:28.390603  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:28.702489  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:28.830580  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:28.833224  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:28.890218  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:29.203006  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:29.334059  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:29.334314  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:29.390276  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:29.702078  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:29.831905  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:29.833924  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:29.890022  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:30.202815  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:30.331892  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:30.334455  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:30.390266  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:30.701693  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:30.833408  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:30.833958  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:30.889771  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:31.202160  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:31.331891  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:31.333778  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:31.389614  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:31.702424  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:31.830896  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:31.833973  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:31.890702  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:32.202447  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:32.338674  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:32.339168  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:32.390837  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:32.702713  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:32.831493  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:32.834351  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:32.890707  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:33.203816  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:33.331431  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:33.334120  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:33.390096  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:33.702851  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:33.832304  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:33.834706  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:33.890439  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:34.203501  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:34.330987  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:34.333396  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:34.391939  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:34.706792  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:34.830754  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:34.833514  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:34.892120  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:35.224343  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:35.331834  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:35.334309  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:35.390702  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:35.702354  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:35.831078  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:35.833272  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:35.890505  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:36.202318  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:36.332531  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:36.334000  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:36.432838  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:36.703043  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:36.831301  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:36.834113  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:36.890555  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:37.205245  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:37.332244  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:37.334885  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:37.432996  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:37.703602  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:37.831765  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:37.833856  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:37.889738  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:38.204336  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:38.330317  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:38.333879  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:38.390772  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:38.703048  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:38.832589  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:38.834260  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:38.890777  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:39.202567  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:39.330686  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:39.333641  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:39.393196  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:39.702217  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:39.830270  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:39.833960  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:39.890136  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:40.203372  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:40.332323  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:40.334022  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:40.390971  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:40.702829  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:40.831116  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:40.833874  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:40.890167  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:41.202087  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:41.331490  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:41.333580  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:41.390629  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:41.702002  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:41.831358  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:41.833967  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:41.890008  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:42.202743  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:42.330715  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:42.333843  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:42.390715  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:42.702310  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:42.833198  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:42.834644  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:42.891619  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:43.203608  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:43.331840  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:43.335164  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:43.390632  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:43.704113  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:43.832001  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:43.834824  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:43.890579  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:44.203441  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:44.332596  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:44.334646  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:13:02.390294  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:02.702126  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:02.834668  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:02.838289  489065 kapi.go:107] duration metric: took 1m16.007948253s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 10:13:02.931804  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:03.202686  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:03.330972  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:06.390130  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:06.703787  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:06.835650  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:06.891081  489065 kapi.go:107] duration metric: took 1m16.504383293s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 10:13:06.895727  489065 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-463201 cluster.
	I1206 10:13:06.898830  489065 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 10:13:06.901848  489065 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 10:13:07.202130  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:07.331021  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:16.713026  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:16.831002  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:17.202518  489065 kapi.go:107] duration metric: took 1m30.004099333s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 10:13:17.330968  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:32.831693  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:33.331263  489065 kapi.go:107] duration metric: took 1m46.503942806s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 10:13:33.334870  489065 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, inspektor-gadget, registry-creds, cloud-spanner, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1206 10:13:33.337801  489065 addons.go:530] duration metric: took 1m52.720177597s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin ingress-dns inspektor-gadget registry-creds cloud-spanner storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1206 10:13:33.337866  489065 start.go:247] waiting for cluster config update ...
	I1206 10:13:33.337890  489065 start.go:256] writing updated cluster config ...
	I1206 10:13:33.338210  489065 ssh_runner.go:195] Run: rm -f paused
	I1206 10:13:33.343804  489065 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:13:33.347280  489065 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lpwwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.352431  489065 pod_ready.go:94] pod "coredns-66bc5c9577-lpwwm" is "Ready"
	I1206 10:13:33.352462  489065 pod_ready.go:86] duration metric: took 5.157489ms for pod "coredns-66bc5c9577-lpwwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.354582  489065 pod_ready.go:83] waiting for pod "etcd-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.359037  489065 pod_ready.go:94] pod "etcd-addons-463201" is "Ready"
	I1206 10:13:33.359064  489065 pod_ready.go:86] duration metric: took 4.454092ms for pod "etcd-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.361590  489065 pod_ready.go:83] waiting for pod "kube-apiserver-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.366289  489065 pod_ready.go:94] pod "kube-apiserver-addons-463201" is "Ready"
	I1206 10:13:33.366316  489065 pod_ready.go:86] duration metric: took 4.697985ms for pod "kube-apiserver-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.368942  489065 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.747791  489065 pod_ready.go:94] pod "kube-controller-manager-addons-463201" is "Ready"
	I1206 10:13:33.747820  489065 pod_ready.go:86] duration metric: took 378.848099ms for pod "kube-controller-manager-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.948413  489065 pod_ready.go:83] waiting for pod "kube-proxy-c7kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.347627  489065 pod_ready.go:94] pod "kube-proxy-c7kr8" is "Ready"
	I1206 10:13:34.347660  489065 pod_ready.go:86] duration metric: took 399.220119ms for pod "kube-proxy-c7kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.548201  489065 pod_ready.go:83] waiting for pod "kube-scheduler-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.952087  489065 pod_ready.go:94] pod "kube-scheduler-addons-463201" is "Ready"
	I1206 10:13:34.952114  489065 pod_ready.go:86] duration metric: took 403.883929ms for pod "kube-scheduler-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.952128  489065 pod_ready.go:40] duration metric: took 1.608291584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:13:35.066845  489065 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 10:13:35.072221  489065 out.go:179] * Done! kubectl is now configured to use "addons-463201" cluster and "default" namespace by default
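
Note: the gcp-auth addon messages above say that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal illustrative sketch (not part of the test output), the Go snippet below builds such a pod spec using the standard client-go API types; the pod name, image, and the label value "true" are assumptions, since the log only names the label key.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Label key taken from the gcp-auth addon output above; the value
	// "true" is an assumption, as the log does not specify one.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical pod name
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "docker.io/kicbase/echo-server:1.0",
			}},
		},
	}
	fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
}

Applying a manifest with this label before the gcp-auth webhook admits the pod should leave it without the mounted credentials; per the log above, pods created earlier must be recreated or the addon re-enabled with --refresh.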
	
	
	==> CRI-O <==
	Dec 06 10:16:34 addons-463201 crio[829]: time="2025-12-06T10:16:34.674497135Z" level=info msg="Removed container 403c12a7b221b818af1ae0b12680e7f69a59ae7c1c44f28f1f1ac013f3d37775: kube-system/registry-creds-764b6fb674-d82zs/registry-creds" id=754a48c5-b7a3-4656-b737-81232f7570ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.696647687Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-6jmrl/POD" id=33ef31ec-9d22-40e5-992c-952dba7c309d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.696719808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.70925296Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6jmrl Namespace:default ID:5849d8c32ba8f385f9f101e6aa68e0b4bf5b44b826e12473e8288ad59d15d7f2 UID:a7afa350-8b36-4709-b742-554c29b6dc85 NetNS:/var/run/netns/0868feca-c602-4a13-9caf-89782fe42040 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000d175c0}] Aliases:map[]}"
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.709298013Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-6jmrl to CNI network \"kindnet\" (type=ptp)"
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.725125788Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6jmrl Namespace:default ID:5849d8c32ba8f385f9f101e6aa68e0b4bf5b44b826e12473e8288ad59d15d7f2 UID:a7afa350-8b36-4709-b742-554c29b6dc85 NetNS:/var/run/netns/0868feca-c602-4a13-9caf-89782fe42040 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000d175c0}] Aliases:map[]}"
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.725284916Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-6jmrl for CNI network kindnet (type=ptp)"
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.729264236Z" level=info msg="Ran pod sandbox 5849d8c32ba8f385f9f101e6aa68e0b4bf5b44b826e12473e8288ad59d15d7f2 with infra container: default/hello-world-app-5d498dc89-6jmrl/POD" id=33ef31ec-9d22-40e5-992c-952dba7c309d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.740191986Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ecbe5106-6873-4eab-8181-8974906e79c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.741645112Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ecbe5106-6873-4eab-8181-8974906e79c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.742009699Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=ecbe5106-6873-4eab-8181-8974906e79c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.745311085Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d0d154ca-40c0-4ac9-9c61-c55235411b4e name=/runtime.v1.ImageService/PullImage
	Dec 06 10:16:42 addons-463201 crio[829]: time="2025-12-06T10:16:42.752194424Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.363172023Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=d0d154ca-40c0-4ac9-9c61-c55235411b4e name=/runtime.v1.ImageService/PullImage
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.363867304Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e0d2706f-c367-40ce-88a5-4c9e21fcffe1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.367816694Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e1d11ce6-dfe2-4690-a5a3-ac0b6e5d3e0f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.377917071Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-6jmrl/hello-world-app" id=832dfcbb-1d31-4152-b86e-c3994b11d359 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.378058959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.405639552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.405966725Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2d89c3b64b68d2b6a44a86321ce32ae25fd35319dac0e7fb1c58fa2f2ce26354/merged/etc/passwd: no such file or directory"
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.405998027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2d89c3b64b68d2b6a44a86321ce32ae25fd35319dac0e7fb1c58fa2f2ce26354/merged/etc/group: no such file or directory"
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.409475967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.443603576Z" level=info msg="Created container 7f3ae9684838a64ca40c999b148f4e4f55de3d55090aa1264af299496d921776: default/hello-world-app-5d498dc89-6jmrl/hello-world-app" id=832dfcbb-1d31-4152-b86e-c3994b11d359 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.446621439Z" level=info msg="Starting container: 7f3ae9684838a64ca40c999b148f4e4f55de3d55090aa1264af299496d921776" id=6de9abf3-f46c-40ee-a77c-98de0fd7428e name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 10:16:43 addons-463201 crio[829]: time="2025-12-06T10:16:43.450835889Z" level=info msg="Started container" PID=7154 containerID=7f3ae9684838a64ca40c999b148f4e4f55de3d55090aa1264af299496d921776 description=default/hello-world-app-5d498dc89-6jmrl/hello-world-app id=6de9abf3-f46c-40ee-a77c-98de0fd7428e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5849d8c32ba8f385f9f101e6aa68e0b4bf5b44b826e12473e8288ad59d15d7f2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	7f3ae9684838a       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   5849d8c32ba8f       hello-world-app-5d498dc89-6jmrl             default
	00fc95e04d83a       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           1                   47a6c8742a4a0       registry-creds-764b6fb674-d82zs             kube-system
	fdd8bcf5da0a9       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   75c976c8783c5       nginx                                       default
	9bb1bb8348b08       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   65df4d7c4fb1b       busybox                                     default
	7f68815caefeb       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   1d12a9b7b2865       ingress-nginx-controller-85d4c799dd-67nbw   ingress-nginx
	5e82e51db5bae       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                    kube-system
	e77ca233e9510       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                    kube-system
	c4503b391c863       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                    kube-system
	d9cc152585cdf       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                    kube-system
	9744230520efe       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                    kube-system
	144ac149fd224       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   92f1092a3f8df       gadget-9sgbv                                gadget
	eaab868690638       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   b5ce36d7cdaf6       gcp-auth-78565c9fb4-kwt2c                   gcp-auth
	3d1cf1b6f7a39       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              patch                                    0                   4b45b7f7de145       ingress-nginx-admission-patch-7snvd         ingress-nginx
	cb95c710b468f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   76a5926522969       ingress-nginx-admission-create-4jrk5        ingress-nginx
	86fe541e9ba32       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   9287e4f872a57       registry-proxy-k4pz5                        kube-system
	db3f01d09c58d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   0c44fb6d23380       snapshot-controller-7d9fbc56b8-c9xc4        kube-system
	4c87ded8b1fe6       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   4b340daaafde7       nvidia-device-plugin-daemonset-wq978        kube-system
	daa185ec097b3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   707fe7a3cbff0       snapshot-controller-7d9fbc56b8-b9lfs        kube-system
	219b695817161       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   7e3333e66ee66       csi-hostpath-resizer-0                      kube-system
	5d4b33e25d2b5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                    kube-system
	f89b62b37376a       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   8946717deff69       metrics-server-85b7d694d7-ghlgl             kube-system
	5126713beb84f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   fa6cc860efd5c       cloud-spanner-emulator-5bdddb765-9s7q5      default
	e83032278589e       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   548eede3fea9f       registry-6b586f9694-bq87w                   kube-system
	d78b8a3ab8327       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   66a372752af25       yakd-dashboard-5ff678cb9-9l52n              yakd-dashboard
	0f7b25b5f8b12       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   eb4d151ebc6a6       csi-hostpath-attacher-0                     kube-system
	51c50d8be4bdb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   fa328292c2f0c       kube-ingress-dns-minikube                   kube-system
	bf1d5142cb992       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   fabf8129d5e78       local-path-provisioner-648f6765c9-fbm4s     local-path-storage
	775995b1bde62       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   16fe5abd3f7e9       coredns-66bc5c9577-lpwwm                    kube-system
	c53340c2393c0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   a2f6d295744d6       storage-provisioner                         kube-system
	d2c1eed3e4df1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             5 minutes ago            Running             kube-proxy                               0                   4e73a757c45a4       kube-proxy-c7kr8                            kube-system
	bb2cff19695f3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   e894bb1e0bd7f       kindnet-f7fln                               kube-system
	c1f6dd47829ed       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   8a203a58aa564       kube-controller-manager-addons-463201       kube-system
	0ee8c78f93030       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   8e746d9526f4a       kube-scheduler-addons-463201                kube-system
	8372b3ca93930       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   00138c2a7b72d       kube-apiserver-addons-463201                kube-system
	5450d6d68764d       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   97639256bb112       etcd-addons-463201                          kube-system
	
	
	==> coredns [775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06] <==
	[INFO] 10.244.0.9:38321 - 34585 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004528693s
	[INFO] 10.244.0.9:38321 - 63111 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000291343s
	[INFO] 10.244.0.9:38321 - 41811 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000298465s
	[INFO] 10.244.0.9:55409 - 23495 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175775s
	[INFO] 10.244.0.9:55409 - 23167 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000229937s
	[INFO] 10.244.0.9:33727 - 20593 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111768s
	[INFO] 10.244.0.9:33727 - 20413 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077783s
	[INFO] 10.244.0.9:51831 - 28968 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095875s
	[INFO] 10.244.0.9:51831 - 28779 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007885s
	[INFO] 10.244.0.9:41740 - 54034 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315694s
	[INFO] 10.244.0.9:41740 - 54503 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001403906s
	[INFO] 10.244.0.9:48397 - 5202 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114s
	[INFO] 10.244.0.9:48397 - 5063 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000214618s
	[INFO] 10.244.0.19:52086 - 38299 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159448s
	[INFO] 10.244.0.19:51876 - 57895 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201235s
	[INFO] 10.244.0.19:40805 - 63109 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127218s
	[INFO] 10.244.0.19:35678 - 24384 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115452s
	[INFO] 10.244.0.19:55474 - 1262 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000184899s
	[INFO] 10.244.0.19:51983 - 41203 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093455s
	[INFO] 10.244.0.19:53100 - 57107 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002144627s
	[INFO] 10.244.0.19:52116 - 58109 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001624785s
	[INFO] 10.244.0.19:35103 - 18677 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001753274s
	[INFO] 10.244.0.19:34432 - 27458 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001563074s
	[INFO] 10.244.0.23:54098 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000203033s
	[INFO] 10.244.0.23:51329 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129236s
	
	
	==> describe nodes <==
	Name:               addons-463201
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-463201
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=addons-463201
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_11_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-463201
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-463201"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:11:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-463201
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:16:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:16:42 +0000   Sat, 06 Dec 2025 10:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:16:42 +0000   Sat, 06 Dec 2025 10:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:16:42 +0000   Sat, 06 Dec 2025 10:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:16:42 +0000   Sat, 06 Dec 2025 10:12:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-463201
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                f3bf18dd-4afd-449b-b566-938b3500b5d7
	  Boot ID:                    e36fa5c9-4dd5-4964-a1e1-f5022a7b372f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     cloud-spanner-emulator-5bdddb765-9s7q5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     hello-world-app-5d498dc89-6jmrl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  gadget                      gadget-9sgbv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  gcp-auth                    gcp-auth-78565c9fb4-kwt2c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-67nbw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-lpwwm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m4s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpathplugin-c44tb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-addons-463201                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m9s
	  kube-system                 kindnet-f7fln                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-463201                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-463201        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-c7kr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-463201                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 metrics-server-85b7d694d7-ghlgl              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m59s
	  kube-system                 nvidia-device-plugin-daemonset-wq978         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 registry-6b586f9694-bq87w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 registry-creds-764b6fb674-d82zs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 registry-proxy-k4pz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-b9lfs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-c9xc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  local-path-storage          local-path-provisioner-648f6765c9-fbm4s      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9l52n               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m2s   kube-proxy       
	  Normal   Starting                 5m10s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m10s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s   kubelet          Node addons-463201 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s   kubelet          Node addons-463201 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s   kubelet          Node addons-463201 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m5s   node-controller  Node addons-463201 event: Registered Node addons-463201 in Controller
	  Normal   NodeReady                4m23s  kubelet          Node addons-463201 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 6 08:13] hrtimer: interrupt took 10759856 ns
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e] <==
	{"level":"warn","ts":"2025-12-06T10:11:30.950790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:30.964016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:30.988955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.020817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.033667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.051248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.069284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.100077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.109487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.139468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.158028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.179741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.193191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.211224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.234233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.260494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.280287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.297014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.391248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:47.406551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:47.428517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.256224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.285325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.302662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.318090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [eaab8686906381577db71a1d73b4b44c4288542751b2a3715c084555f3e3b1ef] <==
	2025/12/06 10:13:06 GCP Auth Webhook started!
	2025/12/06 10:13:35 Ready to marshal response ...
	2025/12/06 10:13:35 Ready to write response ...
	2025/12/06 10:13:35 Ready to marshal response ...
	2025/12/06 10:13:35 Ready to write response ...
	2025/12/06 10:13:35 Ready to marshal response ...
	2025/12/06 10:13:35 Ready to write response ...
	2025/12/06 10:13:57 Ready to marshal response ...
	2025/12/06 10:13:57 Ready to write response ...
	2025/12/06 10:13:58 Ready to marshal response ...
	2025/12/06 10:13:58 Ready to write response ...
	2025/12/06 10:13:58 Ready to marshal response ...
	2025/12/06 10:13:58 Ready to write response ...
	2025/12/06 10:14:07 Ready to marshal response ...
	2025/12/06 10:14:07 Ready to write response ...
	2025/12/06 10:14:09 Ready to marshal response ...
	2025/12/06 10:14:09 Ready to write response ...
	2025/12/06 10:14:24 Ready to marshal response ...
	2025/12/06 10:14:24 Ready to write response ...
	2025/12/06 10:14:27 Ready to marshal response ...
	2025/12/06 10:14:27 Ready to write response ...
	2025/12/06 10:16:42 Ready to marshal response ...
	2025/12/06 10:16:42 Ready to write response ...
	
	
	==> kernel <==
	 10:16:44 up  2:59,  0 user,  load average: 1.03, 1.16, 1.76
	Linux addons-463201 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3] <==
	I1206 10:14:41.126390       1 main.go:301] handling current node
	I1206 10:14:51.126745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:14:51.126787       1 main.go:301] handling current node
	I1206 10:15:01.131540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:15:01.131589       1 main.go:301] handling current node
	I1206 10:15:11.135023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:15:11.135055       1 main.go:301] handling current node
	I1206 10:15:21.126293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:15:21.126327       1 main.go:301] handling current node
	I1206 10:15:31.135211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:15:31.135249       1 main.go:301] handling current node
	I1206 10:15:41.135312       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:15:41.135345       1 main.go:301] handling current node
	I1206 10:15:51.131367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:15:51.131405       1 main.go:301] handling current node
	I1206 10:16:01.135218       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:16:01.135273       1 main.go:301] handling current node
	I1206 10:16:11.135254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:16:11.135293       1 main.go:301] handling current node
	I1206 10:16:21.132914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:16:21.132951       1 main.go:301] handling current node
	I1206 10:16:31.125845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:16:31.125957       1 main.go:301] handling current node
	I1206 10:16:41.130276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:16:41.130397       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b] <==
	E1206 10:12:21.494206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.7.31:443: connect: connection refused" logger="UnhandledError"
	W1206 10:12:46.487146       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:12:46.487201       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1206 10:12:46.487217       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 10:12:46.489295       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:12:46.489397       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 10:12:46.489419       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1206 10:13:00.724858       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.42:443: connect: connection refused" logger="UnhandledError"
	W1206 10:13:00.725451       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:13:00.725864       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 10:13:00.726469       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.42:443: connect: connection refused" logger="UnhandledError"
	I1206 10:13:00.855049       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 10:13:45.204107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47804: use of closed network connection
	E1206 10:13:45.498559       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47826: use of closed network connection
	I1206 10:14:20.647832       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 10:14:24.057671       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 10:14:24.382196       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.112.134"}
	E1206 10:14:36.551789       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1206 10:16:42.599000       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.186.26"}
	
	
	==> kube-controller-manager [c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84] <==
	I1206 10:11:39.284333       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 10:11:39.287581       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 10:11:39.287681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 10:11:39.288330       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 10:11:39.288843       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 10:11:39.288919       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 10:11:39.288955       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 10:11:39.297515       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 10:11:39.297599       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 10:11:39.297631       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 10:11:39.297645       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 10:11:39.297653       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 10:11:39.304998       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 10:11:39.307912       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-463201" podCIDRs=["10.244.0.0/24"]
	E1206 10:11:45.695040       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 10:12:09.248579       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:12:09.248773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 10:12:09.248828       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 10:12:09.277253       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 10:12:09.291796       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 10:12:09.349917       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 10:12:09.393095       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:12:24.237702       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1206 10:12:39.354962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:12:39.402901       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3] <==
	I1206 10:11:41.451245       1 server_linux.go:53] "Using iptables proxy"
	I1206 10:11:41.544883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:11:41.646047       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:11:41.646113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 10:11:41.646236       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:11:41.691386       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 10:11:41.691442       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:11:41.700661       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:11:41.700967       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:11:41.700980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:11:41.703459       1 config.go:200] "Starting service config controller"
	I1206 10:11:41.703470       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:11:41.703489       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:11:41.703493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:11:41.703507       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:11:41.703511       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:11:41.704155       1 config.go:309] "Starting node config controller"
	I1206 10:11:41.704162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:11:41.704169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:11:41.804540       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:11:41.804604       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 10:11:41.804833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309] <==
	E1206 10:11:32.541395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 10:11:32.543862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:11:32.543984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:11:32.544114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:11:32.545599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:11:32.549291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:11:32.550081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:11:32.549551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:11:32.549595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 10:11:32.549641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:11:32.549697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:11:32.549740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:11:32.549783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:11:32.550232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:11:32.550283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:11:32.550321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:11:32.550362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:11:32.549497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:11:33.364917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:11:33.377457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:11:33.483227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:11:33.550504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:11:33.578757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 10:11:33.591413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1206 10:11:35.931539       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 10:14:36 addons-463201 kubelet[1291]: I1206 10:14:36.338060    1291 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-1aa8329a-92e8-4f10-9d9d-03fe30fa2040\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^58a1a20a-d28c-11f0-a220-22bc87f09b72\") on node \"addons-463201\" "
	Dec 06 10:14:36 addons-463201 kubelet[1291]: I1206 10:14:36.343100    1291 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-1aa8329a-92e8-4f10-9d9d-03fe30fa2040" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^58a1a20a-d28c-11f0-a220-22bc87f09b72") on node "addons-463201"
	Dec 06 10:14:36 addons-463201 kubelet[1291]: I1206 10:14:36.438417    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-1aa8329a-92e8-4f10-9d9d-03fe30fa2040\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^58a1a20a-d28c-11f0-a220-22bc87f09b72\") on node \"addons-463201\" DevicePath \"\""
	Dec 06 10:14:36 addons-463201 kubelet[1291]: I1206 10:14:36.944514    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62595b31-17f9-44cd-8a12-58b3a165eb92" path="/var/lib/kubelet/pods/62595b31-17f9-44cd-8a12-58b3a165eb92/volumes"
	Dec 06 10:15:00 addons-463201 kubelet[1291]: I1206 10:15:00.939456    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-bq87w" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:15:29 addons-463201 kubelet[1291]: I1206 10:15:29.939896    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k4pz5" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:15:35 addons-463201 kubelet[1291]: E1206 10:15:35.094170    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5aaef92854b5c45d658d22dcf2ceeca8940488d895407d02689eda8534a069ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5aaef92854b5c45d658d22dcf2ceeca8940488d895407d02689eda8534a069ea/diff: no such file or directory, extraDiskErr: <nil>
	Dec 06 10:15:35 addons-463201 kubelet[1291]: E1206 10:15:35.094308    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c9ccb8b4776fad1fedbd9627483eb543355c84c23d8d8c7dea99ce9929918989/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c9ccb8b4776fad1fedbd9627483eb543355c84c23d8d8c7dea99ce9929918989/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/default_test-local-path_bef56095-296c-4cc1-ba41-e84bed9e7ba9/busybox/0.log" to get inode usage: stat /var/log/pods/default_test-local-path_bef56095-296c-4cc1-ba41-e84bed9e7ba9/busybox/0.log: no such file or directory
	Dec 06 10:15:35 addons-463201 kubelet[1291]: E1206 10:15:35.094346    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bfad8cd418be636828d04f12978dbd515a544017b062662f4d7869634432db37/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bfad8cd418be636828d04f12978dbd515a544017b062662f4d7869634432db37/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/default_task-pv-pod-restore_62595b31-17f9-44cd-8a12-58b3a165eb92/task-pv-container/0.log" to get inode usage: stat /var/log/pods/default_task-pv-pod-restore_62595b31-17f9-44cd-8a12-58b3a165eb92/task-pv-container/0.log: no such file or directory
	Dec 06 10:15:35 addons-463201 kubelet[1291]: E1206 10:15:35.095024    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/945572e906e0cfad5a9bb519d39472f679fce070a9c3306922bcf994fb044676/diff" to get inode usage: stat /var/lib/containers/storage/overlay/945572e906e0cfad5a9bb519d39472f679fce070a9c3306922bcf994fb044676/diff: no such file or directory, extraDiskErr: <nil>
	Dec 06 10:15:38 addons-463201 kubelet[1291]: I1206 10:15:38.939692    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wq978" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:16:19 addons-463201 kubelet[1291]: I1206 10:16:19.940027    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-bq87w" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:16:31 addons-463201 kubelet[1291]: I1206 10:16:31.840550    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d82zs" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:16:33 addons-463201 kubelet[1291]: I1206 10:16:33.650055    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d82zs" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:16:33 addons-463201 kubelet[1291]: I1206 10:16:33.650540    1291 scope.go:117] "RemoveContainer" containerID="403c12a7b221b818af1ae0b12680e7f69a59ae7c1c44f28f1f1ac013f3d37775"
	Dec 06 10:16:34 addons-463201 kubelet[1291]: I1206 10:16:34.657033    1291 scope.go:117] "RemoveContainer" containerID="403c12a7b221b818af1ae0b12680e7f69a59ae7c1c44f28f1f1ac013f3d37775"
	Dec 06 10:16:34 addons-463201 kubelet[1291]: I1206 10:16:34.657375    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d82zs" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:16:34 addons-463201 kubelet[1291]: I1206 10:16:34.657414    1291 scope.go:117] "RemoveContainer" containerID="00fc95e04d83ac52bcdd755e62c33e8c5dc63a522ee05c092b3802b1529db75c"
	Dec 06 10:16:34 addons-463201 kubelet[1291]: E1206 10:16:34.657565    1291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-d82zs_kube-system(58e598f1-3fd2-4d98-a425-39e32abce39a)\"" pod="kube-system/registry-creds-764b6fb674-d82zs" podUID="58e598f1-3fd2-4d98-a425-39e32abce39a"
	Dec 06 10:16:35 addons-463201 kubelet[1291]: E1206 10:16:35.094295    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/382161c6086e44c107e79c1553b1382d20f8fe7e6289d967bc0707674b141cdc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/382161c6086e44c107e79c1553b1382d20f8fe7e6289d967bc0707674b141cdc/diff: no such file or directory, extraDiskErr: <nil>
	Dec 06 10:16:35 addons-463201 kubelet[1291]: I1206 10:16:35.662371    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d82zs" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:16:35 addons-463201 kubelet[1291]: I1206 10:16:35.662428    1291 scope.go:117] "RemoveContainer" containerID="00fc95e04d83ac52bcdd755e62c33e8c5dc63a522ee05c092b3802b1529db75c"
	Dec 06 10:16:35 addons-463201 kubelet[1291]: E1206 10:16:35.662570    1291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-d82zs_kube-system(58e598f1-3fd2-4d98-a425-39e32abce39a)\"" pod="kube-system/registry-creds-764b6fb674-d82zs" podUID="58e598f1-3fd2-4d98-a425-39e32abce39a"
	Dec 06 10:16:42 addons-463201 kubelet[1291]: I1206 10:16:42.443547    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a7afa350-8b36-4709-b742-554c29b6dc85-gcp-creds\") pod \"hello-world-app-5d498dc89-6jmrl\" (UID: \"a7afa350-8b36-4709-b742-554c29b6dc85\") " pod="default/hello-world-app-5d498dc89-6jmrl"
	Dec 06 10:16:42 addons-463201 kubelet[1291]: I1206 10:16:42.444234    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6btz\" (UniqueName: \"kubernetes.io/projected/a7afa350-8b36-4709-b742-554c29b6dc85-kube-api-access-q6btz\") pod \"hello-world-app-5d498dc89-6jmrl\" (UID: \"a7afa350-8b36-4709-b742-554c29b6dc85\") " pod="default/hello-world-app-5d498dc89-6jmrl"
	
	
	==> storage-provisioner [c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27] <==
	W1206 10:16:19.651623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:21.654531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:21.661254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:23.665052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:23.669765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:25.672672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:25.677202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:27.682436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:27.690028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:29.693000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:29.697512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:31.700222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:31.704645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:33.709124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:33.714974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:35.719444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:35.727087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:37.729779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:37.734495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:39.738047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:39.742852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:41.747043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:41.752071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:43.759438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:16:43.765482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
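Reading the dump above: the kubelet lines show registry-creds-764b6fb674-d82zs restarting with CrashLoopBackOff (back-off 10s), and the apiserver reports v1beta1.metrics.k8s.io answering 503 during startup. A minimal follow-up sketch, assuming the addons-463201 profile is still up, to capture the crashing container's previous log:

	kubectl --context addons-463201 -n kube-system logs registry-creds-764b6fb674-d82zs --previous
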
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-463201 -n addons-463201
helpers_test.go:269: (dbg) Run:  kubectl --context addons-463201 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-463201 describe pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-463201 describe pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd: exit status 1 (101.160542ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4jrk5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7snvd" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-463201 describe pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd: exit status 1
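The two admission pods named above back ingress-nginx's one-shot certgen Jobs, so a plausible reading is that they were garbage-collected between the field-selector listing and the describe call; the NotFound errors are consistent with that rather than with a lookup bug. A hedged way to confirm, assuming the cluster is still reachable:

	kubectl --context addons-463201 -n ingress-nginx get jobs,pods -o wide
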
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (285.415861ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:16:45.921336  498617 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:16:45.922179  498617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:16:45.922198  498617 out.go:374] Setting ErrFile to fd 2...
	I1206 10:16:45.922235  498617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:16:45.922582  498617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:16:45.923059  498617 mustload.go:66] Loading cluster: addons-463201
	I1206 10:16:45.923551  498617 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:16:45.923573  498617 addons.go:622] checking whether the cluster is paused
	I1206 10:16:45.923734  498617 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:16:45.923752  498617 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:16:45.924352  498617 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:16:45.947955  498617 ssh_runner.go:195] Run: systemctl --version
	I1206 10:16:45.948010  498617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:16:45.973045  498617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:16:46.082715  498617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:16:46.082896  498617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:16:46.121467  498617 cri.go:89] found id: "00fc95e04d83ac52bcdd755e62c33e8c5dc63a522ee05c092b3802b1529db75c"
	I1206 10:16:46.121493  498617 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:16:46.121498  498617 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:16:46.121502  498617 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:16:46.121506  498617 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:16:46.121509  498617 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:16:46.121513  498617 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:16:46.121516  498617 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:16:46.121519  498617 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:16:46.121529  498617 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:16:46.121537  498617 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:16:46.121541  498617 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:16:46.121545  498617 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:16:46.121556  498617 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:16:46.121560  498617 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:16:46.121568  498617 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:16:46.121578  498617 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:16:46.121583  498617 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:16:46.121586  498617 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:16:46.121590  498617 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:16:46.121594  498617 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:16:46.121598  498617 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:16:46.121601  498617 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:16:46.121604  498617 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:16:46.121608  498617 cri.go:89] found id: ""
	I1206 10:16:46.121665  498617 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:16:46.137478  498617 out.go:203] 
	W1206 10:16:46.140392  498617 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:16:46.140434  498617 out.go:285] * 
	* 
	W1206 10:16:46.146947  498617 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:16:46.150735  498617 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable ingress --alsologtostderr -v=1: exit status 11 (281.282229ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:16:46.225346  498663 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:16:46.226058  498663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:16:46.226073  498663 out.go:374] Setting ErrFile to fd 2...
	I1206 10:16:46.226080  498663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:16:46.226384  498663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:16:46.226727  498663 mustload.go:66] Loading cluster: addons-463201
	I1206 10:16:46.227257  498663 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:16:46.227294  498663 addons.go:622] checking whether the cluster is paused
	I1206 10:16:46.227497  498663 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:16:46.227518  498663 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:16:46.228076  498663 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:16:46.246177  498663 ssh_runner.go:195] Run: systemctl --version
	I1206 10:16:46.246246  498663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:16:46.266156  498663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:16:46.374015  498663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:16:46.374123  498663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:16:46.404842  498663 cri.go:89] found id: "00fc95e04d83ac52bcdd755e62c33e8c5dc63a522ee05c092b3802b1529db75c"
	I1206 10:16:46.404866  498663 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:16:46.404887  498663 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:16:46.404891  498663 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:16:46.404895  498663 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:16:46.404900  498663 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:16:46.404903  498663 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:16:46.404906  498663 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:16:46.404909  498663 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:16:46.404920  498663 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:16:46.404926  498663 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:16:46.404929  498663 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:16:46.404934  498663 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:16:46.404940  498663 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:16:46.404943  498663 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:16:46.404950  498663 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:16:46.404954  498663 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:16:46.404958  498663 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:16:46.404961  498663 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:16:46.404964  498663 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:16:46.404969  498663 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:16:46.404975  498663 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:16:46.404978  498663 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:16:46.404981  498663 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:16:46.404984  498663 cri.go:89] found id: ""
	I1206 10:16:46.405038  498663 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:16:46.420170  498663 out.go:203] 
	W1206 10:16:46.423048  498663 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:16:46.423067  498663 out.go:285] * 
	* 
	W1206 10:16:46.429817  498663 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:16:46.432669  498663 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (142.71s)
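
Note: every disable failure in this group has the same shape. minikube's pre-flight check verifies the cluster is not paused; the CRI-level crictl listing succeeds, and the follow-up "sudo runc list -f json" exits 1 because /run/runc does not exist on this CRI-O node. A minimal sketch for reproducing both halves of that check by hand against this profile (the commands are taken from the log above; whether the state directory is missing or merely located elsewhere is not settled by the log):

	# The CRI-level listing that succeeds in the logs above:
	out/minikube-linux-arm64 -p addons-463201 ssh -- sudo crictl ps -a --quiet \
	    --label io.kubernetes.pod.namespace=kube-system

	# The runc listing that fails with exit status 1:
	out/minikube-linux-arm64 -p addons-463201 ssh -- sudo runc list -f json
	# expected: level=error msg="open /run/runc: no such file or directory"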

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9sgbv" [049cecf9-6513-4e4e-a11e-ed4a514772bd] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003300931s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (264.262061ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:23.507065  496711 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:23.507999  496711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:23.508041  496711 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:23.508065  496711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:23.508374  496711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:23.508699  496711 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:23.509166  496711 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:23.509213  496711 addons.go:622] checking whether the cluster is paused
	I1206 10:14:23.509347  496711 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:23.509385  496711 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:23.509900  496711 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:23.526887  496711 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:23.526956  496711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:23.544228  496711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:23.656171  496711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:23.656260  496711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:23.689217  496711 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:23.689238  496711 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:23.689243  496711 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:23.689255  496711 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:23.689258  496711 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:23.689262  496711 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:23.689266  496711 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:23.689269  496711 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:23.689272  496711 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:23.689277  496711 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:23.689281  496711 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:23.689284  496711 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:23.689287  496711 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:23.689291  496711 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:23.689294  496711 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:23.689300  496711 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:23.689307  496711 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:23.689310  496711 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:23.689313  496711 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:23.689317  496711 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:23.689322  496711 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:23.689325  496711 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:23.689328  496711 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:23.689331  496711 cri.go:89] found id: ""
	I1206 10:14:23.689382  496711 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:23.704955  496711 out.go:203] 
	W1206 10:14:23.707961  496711 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:23.707991  496711 out.go:285] * 
	* 
	W1206 10:14:23.715334  496711 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:23.718230  496711 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)
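
Note: the log prefixes point at where this check lives in minikube itself: addons.go logs "checking whether the cluster is paused", cri.go logs "listing CRI containers in root" with State:paused, and the failing step is the "list paused: runc" helper. Grepping a minikube checkout for those literal strings locates the code (a sketch; it assumes a checkout in ./minikube and that the messages are unchanged at this commit):

	# Locate the paused check in the minikube sources by its log strings:
	git -C minikube grep -n "checking whether the cluster is paused"
	git -C minikube grep -n "listing CRI containers in root"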

TestAddons/parallel/MetricsServer (6.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.020762ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004600865s
addons_test.go:463: (dbg) Run:  kubectl --context addons-463201 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (270.688271ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:18.245503  496588 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:18.246365  496588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:18.246415  496588 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:18.246439  496588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:18.247112  496588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:18.247518  496588 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:18.247977  496588 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:18.248017  496588 addons.go:622] checking whether the cluster is paused
	I1206 10:14:18.248171  496588 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:18.248202  496588 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:18.248763  496588 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:18.266654  496588 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:18.266803  496588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:18.285661  496588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:18.393859  496588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:18.393969  496588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:18.424410  496588 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:18.424440  496588 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:18.424446  496588 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:18.424451  496588 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:18.424454  496588 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:18.424458  496588 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:18.424462  496588 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:18.424465  496588 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:18.424468  496588 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:18.424477  496588 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:18.424480  496588 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:18.424483  496588 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:18.424487  496588 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:18.424491  496588 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:18.424494  496588 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:18.424504  496588 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:18.424511  496588 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:18.424517  496588 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:18.424520  496588 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:18.424524  496588 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:18.424560  496588 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:18.424568  496588 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:18.424572  496588 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:18.424576  496588 cri.go:89] found id: ""
	I1206 10:14:18.424633  496588 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:18.440075  496588 out.go:203] 
	W1206 10:14:18.442950  496588 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:18.442976  496588 out.go:285] * 
	* 
	W1206 10:14:18.449622  496588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:18.452397  496588 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.37s)
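
Note: the functional half of this test passed. The metrics-server pod went healthy and the kubectl top query returned without error; only the trailing disable call failed on the shared runc check. The passing query can be replayed directly (same command the harness ran above):

	kubectl --context addons-463201 top pods -n kube-system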

TestAddons/parallel/CSI (29.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1206 10:14:08.008664  488068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 10:14:08.021012  488068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 10:14:08.021041  488068 kapi.go:107] duration metric: took 12.393ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.404767ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-463201 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-463201 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b6249c13-538c-4d22-be4c-45f82a0567d1] Pending
helpers_test.go:352: "task-pv-pod" [b6249c13-538c-4d22-be4c-45f82a0567d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b6249c13-538c-4d22-be4c-45f82a0567d1] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003785393s
addons_test.go:572: (dbg) Run:  kubectl --context addons-463201 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-463201 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-463201 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-463201 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-463201 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-463201 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-463201 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [62595b31-17f9-44cd-8a12-58b3a165eb92] Pending
helpers_test.go:352: "task-pv-pod-restore" [62595b31-17f9-44cd-8a12-58b3a165eb92] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [62595b31-17f9-44cd-8a12-58b3a165eb92] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00354482s
addons_test.go:614: (dbg) Run:  kubectl --context addons-463201 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-463201 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-463201 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (278.116481ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:36.989011  497305 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:36.989870  497305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:36.989908  497305 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:36.989928  497305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:36.990207  497305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:36.990520  497305 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:36.990933  497305 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:36.990984  497305 addons.go:622] checking whether the cluster is paused
	I1206 10:14:36.991173  497305 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:36.991213  497305 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:36.991781  497305 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:37.017655  497305 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:37.017711  497305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:37.039269  497305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:37.153655  497305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:37.153744  497305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:37.185375  497305 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:37.185452  497305 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:37.185464  497305 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:37.185469  497305 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:37.185472  497305 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:37.185476  497305 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:37.185479  497305 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:37.185482  497305 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:37.185485  497305 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:37.185491  497305 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:37.185495  497305 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:37.185499  497305 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:37.185502  497305 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:37.185505  497305 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:37.185508  497305 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:37.185518  497305 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:37.185524  497305 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:37.185528  497305 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:37.185531  497305 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:37.185534  497305 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:37.185539  497305 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:37.185542  497305 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:37.185544  497305 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:37.185548  497305 cri.go:89] found id: ""
	I1206 10:14:37.185602  497305 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:37.200802  497305 out.go:203] 
	W1206 10:14:37.203753  497305 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:37.203780  497305 out.go:285] * 
	* 
	W1206 10:14:37.210317  497305 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:37.213212  497305 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (263.835994ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:37.275089  497348 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:37.275928  497348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:37.275944  497348 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:37.275951  497348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:37.276267  497348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:37.276605  497348 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:37.277032  497348 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:37.277054  497348 addons.go:622] checking whether the cluster is paused
	I1206 10:14:37.277202  497348 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:37.277221  497348 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:37.277759  497348 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:37.295572  497348 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:37.295640  497348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:37.312304  497348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:37.417941  497348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:37.418043  497348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:37.449319  497348 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:37.449340  497348 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:37.449344  497348 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:37.449349  497348 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:37.449353  497348 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:37.449357  497348 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:37.449360  497348 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:37.449364  497348 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:37.449367  497348 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:37.449375  497348 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:37.449378  497348 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:37.449381  497348 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:37.449384  497348 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:37.449388  497348 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:37.449392  497348 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:37.449401  497348 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:37.449404  497348 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:37.449409  497348 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:37.449413  497348 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:37.449416  497348 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:37.449420  497348 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:37.449424  497348 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:37.449427  497348 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:37.449429  497348 cri.go:89] found id: ""
	I1206 10:14:37.449509  497348 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:37.464674  497348 out.go:203] 
	W1206 10:14:37.467572  497348 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:37.467601  497348 out.go:285] * 
	* 
	W1206 10:14:37.474048  497348 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:37.477068  497348 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (29.48s)
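
Note: the CSI scenario itself (PVC create, snapshot, restore, cleanup) completed; both trailing addon-disable calls then failed on the same runc check. To compare the hard-coded /run/runc lookup against the runtime CRI-O is actually configured with, something like the following should work (a sketch: crictl info and /etc/crio/ are standard locations, but the exact config keys present in this image are an assumption):

	# Ask CRI-O which runtime it reports to the CRI:
	out/minikube-linux-arm64 -p addons-463201 ssh -- sudo crictl info

	# Look for the configured OCI runtime and its state root in CRI-O's config:
	out/minikube-linux-arm64 -p addons-463201 ssh -- \
	    sudo grep -riE "default_runtime|runtime_path|runtime_root" /etc/crio/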

TestAddons/parallel/Headlamp (4.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-463201 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-463201 --alsologtostderr -v=1: exit status 11 (399.146212ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:08.035772  495853 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:08.036588  495853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:08.036602  495853 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:08.036609  495853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:08.036916  495853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:08.037257  495853 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:08.037653  495853 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:08.037665  495853 addons.go:622] checking whether the cluster is paused
	I1206 10:14:08.037775  495853 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:08.037785  495853 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:08.038371  495853 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:08.088654  495853 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:08.088712  495853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:08.128495  495853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:08.261420  495853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:08.261518  495853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:08.309433  495853 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:08.309457  495853 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:08.309462  495853 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:08.309466  495853 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:08.309477  495853 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:08.309482  495853 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:08.309485  495853 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:08.309492  495853 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:08.309498  495853 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:08.309504  495853 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:08.309508  495853 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:08.309511  495853 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:08.309514  495853 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:08.309516  495853 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:08.309519  495853 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:08.309524  495853 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:08.309528  495853 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:08.309532  495853 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:08.309535  495853 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:08.309538  495853 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:08.309543  495853 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:08.309546  495853 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:08.309549  495853 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:08.309552  495853 cri.go:89] found id: ""
	I1206 10:14:08.309708  495853 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:08.325773  495853 out.go:203] 
	W1206 10:14:08.328772  495853 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:08.328799  495853 out.go:285] * 
	W1206 10:14:08.335595  495853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:08.338515  495853 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-463201 --alsologtostderr -v=1": exit status 11
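
Note: the exit status 11 above comes from minikube's paused-cluster guard, not from headlamp itself. Before enabling an addon, minikube lists runc containers inside the node (the same "sudo runc list -f json" seen in the stderr), and on this crio node /run/runc does not exist, so the guard fails before the addon is touched. A minimal way to re-run the failing check by hand (a sketch, assuming the profile name from this run):

	# re-run the exact command the paused check shells into the node
	out/minikube-linux-arm64 -p addons-463201 ssh -- sudo runc list -f json
	# expected on this node: "open /run/runc: no such file or directory" (exit status 1)
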
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-463201
helpers_test.go:243: (dbg) docker inspect addons-463201:

-- stdout --
	[
	    {
	        "Id": "c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd",
	        "Created": "2025-12-06T10:11:08.484782916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:11:08.55279692Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/hosts",
	        "LogPath": "/var/lib/docker/containers/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd-json.log",
	        "Name": "/addons-463201",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-463201:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-463201",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd",
	                "LowerDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/471c74bf40180ffd113d45a194c9694660f8335b68d71832a9104dfb002b441e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-463201",
	                "Source": "/var/lib/docker/volumes/addons-463201/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-463201",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-463201",
	                "name.minikube.sigs.k8s.io": "addons-463201",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea12389f1ee0eca5fa8bb4ed39747af46d36780140f393641f05623c4d3f35c",
	            "SandboxKey": "/var/run/docker/netns/cea12389f1ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-463201": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:8f:1c:2e:15:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5ab4f6c223ab761207584f4f06b2cac589555936c8e4d745cd40ab7028f06221",
	                    "EndpointID": "5aa636b3ff4619d7ae6aadb26df1a1f81577ec1cfb6e189cf9b0d10f13e9977e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-463201",
	                        "c07cf5a07d38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
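
The SSH endpoint the harness dials (127.0.0.1:33168, under "Ports" in the inspect output above) is resolved with the same Go template logged at the top of this section; the equivalent stand-alone command, assuming the container is still running, is:

	# recover the host port mapped to the node's sshd
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-463201
	# prints 33168 for this run; empty output means the port mapping is gone
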
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-463201 -n addons-463201
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-463201 logs -n 25: (1.782191495s)
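
The dump below is limited to the most recent lines of each log source ("logs -n 25"); for a complete capture, the advice box above asks for the full log written to a file, which the same binary produces with (sketch):

	# full log capture for attaching to a GitHub issue, per the advice box
	out/minikube-linux-arm64 -p addons-463201 logs --file=logs.txt
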
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ delete  │ -p download-only-629505                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-629505   │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ start   │ -o=json --download-only -p download-only-273530 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-273530   │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-273530                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-273530   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-521939                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-521939   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-629505                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-629505   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-273530                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-273530   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ start   │ --download-only -p download-docker-001536 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-001536 │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ delete  │ -p download-docker-001536                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-001536 │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ start   │ --download-only -p binary-mirror-243935 --alsologtostderr --binary-mirror http://127.0.0.1:42811 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-243935   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ delete  │ -p binary-mirror-243935                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-243935   │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:11 UTC │
	│ addons  │ disable dashboard -p addons-463201                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ addons  │ enable dashboard -p addons-463201                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │                     │
	│ start   │ -p addons-463201 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:11 UTC │ 06 Dec 25 10:13 UTC │
	│ addons  │ addons-463201 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-463201 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-463201 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-463201 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:13 UTC │                     │
	│ ip      │ addons-463201 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │ 06 Dec 25 10:14 UTC │
	│ addons  │ addons-463201 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ ssh     │ addons-463201 ssh cat /opt/local-path-provisioner/pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │ 06 Dec 25 10:14 UTC │
	│ addons  │ addons-463201 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-463201 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-463201 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-463201          │ jenkins │ v1.37.0 │ 06 Dec 25 10:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:11:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:11:02.269754  489065 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:11:02.269946  489065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:11:02.269988  489065 out.go:374] Setting ErrFile to fd 2...
	I1206 10:11:02.270011  489065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:11:02.270374  489065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:11:02.270962  489065 out.go:368] Setting JSON to false
	I1206 10:11:02.272348  489065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10414,"bootTime":1765005449,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:11:02.272455  489065 start.go:143] virtualization:  
	I1206 10:11:02.275746  489065 out.go:179] * [addons-463201] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:11:02.279380  489065 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:11:02.279506  489065 notify.go:221] Checking for updates...
	I1206 10:11:02.285080  489065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:11:02.288029  489065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:11:02.290818  489065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:11:02.293719  489065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:11:02.296699  489065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:11:02.299709  489065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:11:02.327195  489065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:11:02.327321  489065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:11:02.386522  489065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:11:02.377345521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:11:02.386626  489065 docker.go:319] overlay module found
	I1206 10:11:02.389850  489065 out.go:179] * Using the docker driver based on user configuration
	I1206 10:11:02.392781  489065 start.go:309] selected driver: docker
	I1206 10:11:02.392824  489065 start.go:927] validating driver "docker" against <nil>
	I1206 10:11:02.392839  489065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:11:02.393691  489065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:11:02.453306  489065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:11:02.444232934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:11:02.453465  489065 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:11:02.453700  489065 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:11:02.456659  489065 out.go:179] * Using Docker driver with root privileges
	I1206 10:11:02.459518  489065 cni.go:84] Creating CNI manager for ""
	I1206 10:11:02.459589  489065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:11:02.459603  489065 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:11:02.459701  489065 start.go:353] cluster config:
	{Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:11:02.462832  489065 out.go:179] * Starting "addons-463201" primary control-plane node in "addons-463201" cluster
	I1206 10:11:02.465781  489065 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:11:02.468908  489065 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:11:02.471818  489065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:11:02.471876  489065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 10:11:02.471889  489065 cache.go:65] Caching tarball of preloaded images
	I1206 10:11:02.471893  489065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:11:02.471971  489065 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:11:02.471982  489065 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 10:11:02.472365  489065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json ...
	I1206 10:11:02.472384  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json: {Name:mk4121c83f831a50388a5d275aaf3116a37fda3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:02.491579  489065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:11:02.491604  489065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:11:02.491624  489065 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:11:02.491657  489065 start.go:360] acquireMachinesLock for addons-463201: {Name:mke5e16993baf13ed5da7fa3be575b8b2edba38c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:11:02.491780  489065 start.go:364] duration metric: took 102.144µs to acquireMachinesLock for "addons-463201"
	I1206 10:11:02.491811  489065 start.go:93] Provisioning new machine with config: &{Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:11:02.491879  489065 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:11:02.495228  489065 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 10:11:02.495462  489065 start.go:159] libmachine.API.Create for "addons-463201" (driver="docker")
	I1206 10:11:02.495502  489065 client.go:173] LocalClient.Create starting
	I1206 10:11:02.495638  489065 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem
	I1206 10:11:02.793783  489065 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem
	I1206 10:11:03.144660  489065 cli_runner.go:164] Run: docker network inspect addons-463201 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:11:03.162973  489065 cli_runner.go:211] docker network inspect addons-463201 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:11:03.163056  489065 network_create.go:284] running [docker network inspect addons-463201] to gather additional debugging logs...
	I1206 10:11:03.163079  489065 cli_runner.go:164] Run: docker network inspect addons-463201
	W1206 10:11:03.180429  489065 cli_runner.go:211] docker network inspect addons-463201 returned with exit code 1
	I1206 10:11:03.180461  489065 network_create.go:287] error running [docker network inspect addons-463201]: docker network inspect addons-463201: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-463201 not found
	I1206 10:11:03.180489  489065 network_create.go:289] output of [docker network inspect addons-463201]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-463201 not found
	
	** /stderr **
	I1206 10:11:03.180584  489065 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:11:03.197239  489065 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c7930}
	I1206 10:11:03.197279  489065 network_create.go:124] attempt to create docker network addons-463201 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:11:03.197344  489065 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-463201 addons-463201
	I1206 10:11:03.260960  489065 network_create.go:108] docker network addons-463201 192.168.49.0/24 created
	I1206 10:11:03.260993  489065 kic.go:121] calculated static IP "192.168.49.2" for the "addons-463201" container
	I1206 10:11:03.261066  489065 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:11:03.277897  489065 cli_runner.go:164] Run: docker volume create addons-463201 --label name.minikube.sigs.k8s.io=addons-463201 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:11:03.296953  489065 oci.go:103] Successfully created a docker volume addons-463201
	I1206 10:11:03.297040  489065 cli_runner.go:164] Run: docker run --rm --name addons-463201-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463201 --entrypoint /usr/bin/test -v addons-463201:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:11:04.434538  489065 cli_runner.go:217] Completed: docker run --rm --name addons-463201-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463201 --entrypoint /usr/bin/test -v addons-463201:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.137443232s)
	I1206 10:11:04.434568  489065 oci.go:107] Successfully prepared a docker volume addons-463201
	I1206 10:11:04.434611  489065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:11:04.434622  489065 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:11:04.434689  489065 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-463201:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:11:08.405952  489065 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-463201:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.971222465s)
	I1206 10:11:08.405986  489065 kic.go:203] duration metric: took 3.971359875s to extract preloaded images to volume ...
	W1206 10:11:08.406134  489065 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:11:08.406247  489065 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:11:08.461925  489065 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-463201 --name addons-463201 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463201 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-463201 --network addons-463201 --ip 192.168.49.2 --volume addons-463201:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 10:11:08.781254  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Running}}
	I1206 10:11:08.810212  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:08.836004  489065 cli_runner.go:164] Run: docker exec addons-463201 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:11:08.906686  489065 oci.go:144] the created container "addons-463201" has a running status.
	I1206 10:11:08.906712  489065 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa...
	I1206 10:11:09.196957  489065 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:11:09.224761  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:09.250676  489065 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:11:09.250701  489065 kic_runner.go:114] Args: [docker exec --privileged addons-463201 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:11:09.307223  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:09.325186  489065 machine.go:94] provisionDockerMachine start ...
	I1206 10:11:09.325292  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:09.343250  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:09.343578  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:09.343593  489065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:11:09.344279  489065 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:11:12.498909  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-463201
	
	I1206 10:11:12.498933  489065 ubuntu.go:182] provisioning hostname "addons-463201"
	I1206 10:11:12.499002  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:12.517843  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:12.518219  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:12.518238  489065 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-463201 && echo "addons-463201" | sudo tee /etc/hostname
	I1206 10:11:12.681238  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-463201
	
	I1206 10:11:12.681373  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:12.700299  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:12.700611  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:12.700634  489065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-463201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-463201/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-463201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:11:12.855593  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
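Both hostname writes above (the transient hostname and the 127.0.1.1 entry) can be verified in one shot; a sketch:

	docker exec addons-463201 sh -c 'hostname && grep 127.0.1.1 /etc/hosts'
	# expected: addons-463201 / 127.0.1.1 addons-463201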
	I1206 10:11:12.855618  489065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:11:12.855641  489065 ubuntu.go:190] setting up certificates
	I1206 10:11:12.855651  489065 provision.go:84] configureAuth start
	I1206 10:11:12.855743  489065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463201
	I1206 10:11:12.872432  489065 provision.go:143] copyHostCerts
	I1206 10:11:12.872523  489065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:11:12.872651  489065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:11:12.872704  489065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:11:12.872750  489065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.addons-463201 san=[127.0.0.1 192.168.49.2 addons-463201 localhost minikube]
	I1206 10:11:13.054543  489065 provision.go:177] copyRemoteCerts
	I1206 10:11:13.054614  489065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:11:13.054655  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.072005  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:13.179085  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:11:13.197075  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:11:13.214820  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 10:11:13.233154  489065 provision.go:87] duration metric: took 377.478573ms to configureAuth
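The server certificate pushed to /etc/docker above is generated with the SANs listed in the san=[...] line; a sketch to confirm from the host:

	docker exec addons-463201 sh -c \
	  'openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'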
	I1206 10:11:13.233181  489065 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:11:13.233371  489065 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:11:13.233486  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.249904  489065 main.go:143] libmachine: Using SSH client type: native
	I1206 10:11:13.250233  489065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1206 10:11:13.250253  489065 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:11:13.728181  489065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:11:13.728206  489065 machine.go:97] duration metric: took 4.402996843s to provisionDockerMachine
	I1206 10:11:13.728216  489065 client.go:176] duration metric: took 11.232703569s to LocalClient.Create
	I1206 10:11:13.728229  489065 start.go:167] duration metric: took 11.232768486s to libmachine.API.Create "addons-463201"
	I1206 10:11:13.728237  489065 start.go:293] postStartSetup for "addons-463201" (driver="docker")
	I1206 10:11:13.728253  489065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:11:13.728324  489065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:11:13.728386  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.750415  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:13.855568  489065 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:11:13.858958  489065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:11:13.858986  489065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:11:13.859000  489065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:11:13.859072  489065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:11:13.859094  489065 start.go:296] duration metric: took 130.845682ms for postStartSetup
	I1206 10:11:13.859450  489065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463201
	I1206 10:11:13.876285  489065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/config.json ...
	I1206 10:11:13.876577  489065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:11:13.876642  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:13.893434  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:13.996286  489065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:11:14.002858  489065 start.go:128] duration metric: took 11.510963595s to createHost
	I1206 10:11:14.002948  489065 start.go:83] releasing machines lock for "addons-463201", held for 11.511154787s
	I1206 10:11:14.003078  489065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463201
	I1206 10:11:14.027196  489065 ssh_runner.go:195] Run: cat /version.json
	I1206 10:11:14.027248  489065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:11:14.027319  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:14.027253  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:14.045457  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:14.045893  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:14.150949  489065 ssh_runner.go:195] Run: systemctl --version
	I1206 10:11:14.242685  489065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:11:14.286319  489065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:11:14.290743  489065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:11:14.290841  489065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:11:14.318911  489065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
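The find invocation above renames any competing bridge/podman CNI configs so only the kindnet config selected later is loaded. The same rename, as a standalone sketch with the globs quoted for an interactive shell (run inside the node):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;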
	I1206 10:11:14.318948  489065 start.go:496] detecting cgroup driver to use...
	I1206 10:11:14.318981  489065 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:11:14.319052  489065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:11:14.336536  489065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:11:14.349423  489065 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:11:14.349519  489065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:11:14.366957  489065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:11:14.385438  489065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:11:14.504429  489065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:11:14.658975  489065 docker.go:234] disabling docker service ...
	I1206 10:11:14.659090  489065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:11:14.686749  489065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:11:14.700932  489065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:11:14.831039  489065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:11:14.943693  489065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
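The sequence above hands the node over to CRI-O by stopping, disabling, and masking the Docker-side units; as a standalone sketch (run inside the node, flags as in the log):

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service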
	I1206 10:11:14.957826  489065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:11:14.972477  489065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:11:14.972545  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:14.981569  489065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:11:14.981662  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:14.990631  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.006982  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.027263  489065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:11:15.037567  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.047895  489065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.063326  489065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:11:15.073367  489065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:11:15.081528  489065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:11:15.089554  489065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:11:15.199546  489065 ssh_runner.go:195] Run: sudo systemctl restart crio
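After the restart, the two sed edits above should be visible in the CRI-O drop-in; a sketch to confirm from the host:

	docker exec addons-463201 grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1"
	#           cgroup_manager = "cgroupfs"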
	I1206 10:11:15.371267  489065 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:11:15.371394  489065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:11:15.375264  489065 start.go:564] Will wait 60s for crictl version
	I1206 10:11:15.375342  489065 ssh_runner.go:195] Run: which crictl
	I1206 10:11:15.378925  489065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:11:15.410728  489065 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:11:15.410859  489065 ssh_runner.go:195] Run: crio --version
	I1206 10:11:15.442718  489065 ssh_runner.go:195] Run: crio --version
	I1206 10:11:15.476347  489065 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 10:11:15.479252  489065 cli_runner.go:164] Run: docker network inspect addons-463201 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:11:15.495453  489065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:11:15.499480  489065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:11:15.509154  489065 kubeadm.go:884] updating cluster {Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:11:15.509282  489065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:11:15.509339  489065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:11:15.549759  489065 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:11:15.549785  489065 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:11:15.549841  489065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:11:15.575079  489065 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:11:15.575103  489065 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:11:15.575111  489065 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 10:11:15.575257  489065 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-463201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
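The kubelet unit above is written out a few lines below as a systemd drop-in; a sketch to inspect it from the host once the scp completes:

	docker exec addons-463201 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf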
	I1206 10:11:15.575364  489065 ssh_runner.go:195] Run: crio config
	I1206 10:11:15.646595  489065 cni.go:84] Creating CNI manager for ""
	I1206 10:11:15.646618  489065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:11:15.646634  489065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:11:15.646658  489065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-463201 NodeName:addons-463201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:11:15.646792  489065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-463201"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
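Before the real init below, the generated config can be exercised without side effects; a sketch using the same pinned kubeadm binary (run inside the node):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run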
	
	I1206 10:11:15.646869  489065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:11:15.654693  489065 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:11:15.654813  489065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:11:15.662402  489065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 10:11:15.674984  489065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:11:15.688520  489065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1206 10:11:15.701891  489065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:11:15.706749  489065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
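Both in-node aliases (host.minikube.internal above and control-plane.minikube.internal here) can be checked with getent; a sketch:

	docker exec addons-463201 getent hosts host.minikube.internal control-plane.minikube.internal
	# expected: 192.168.49.1 and 192.168.49.2 respectively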
	I1206 10:11:15.717000  489065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:11:15.858291  489065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:11:15.876427  489065 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201 for IP: 192.168.49.2
	I1206 10:11:15.876450  489065 certs.go:195] generating shared ca certs ...
	I1206 10:11:15.876466  489065 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:15.876680  489065 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:11:16.045942  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt ...
	I1206 10:11:16.045975  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt: {Name:mk3118e07c47df7b147fdee8b9a1528f37e11089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.046218  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key ...
	I1206 10:11:16.046234  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key: {Name:mk8407584158b4a98229fa6be2ab9e28cf251cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.046330  489065 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:11:16.267154  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt ...
	I1206 10:11:16.267185  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt: {Name:mk8bfec8e2ea314a06020d5dc8d08c9364ec5f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.267362  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key ...
	I1206 10:11:16.267372  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key: {Name:mkd100ae304037f849fef9c412ec498fd7af0314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.267459  489065 certs.go:257] generating profile certs ...
	I1206 10:11:16.267521  489065 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.key
	I1206 10:11:16.267539  489065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt with IP's: []
	I1206 10:11:16.379520  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt ...
	I1206 10:11:16.379589  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: {Name:mk27499a800b2fbf1affd96cacf4ca3c735b011c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.379771  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.key ...
	I1206 10:11:16.379786  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.key: {Name:mk58fadde495cc32056c31562edb06cc7ec3af9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.379889  489065 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c
	I1206 10:11:16.379908  489065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:11:16.436080  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c ...
	I1206 10:11:16.436108  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c: {Name:mk885a56a7d3b4747cb1c2be691a25e351cd2427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.436296  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c ...
	I1206 10:11:16.436313  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c: {Name:mk0b4adc52d4ea35e8b10c39e6a7fe76de1140ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:16.436405  489065 certs.go:382] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt.da4af44c -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt
	I1206 10:11:16.436489  489065 certs.go:386] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key.da4af44c -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key
	I1206 10:11:16.436547  489065 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key
	I1206 10:11:16.436566  489065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt with IP's: []
	I1206 10:11:17.292151  489065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt ...
	I1206 10:11:17.292185  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt: {Name:mk5c3cdb3197d7b3590a183324c49c9c6943febe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:17.292382  489065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key ...
	I1206 10:11:17.292396  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key: {Name:mka2e3533bb772896abff489310977c7be04e583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:17.292591  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:11:17.292636  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:11:17.292669  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:11:17.292705  489065 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:11:17.293284  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:11:17.311761  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:11:17.331058  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:11:17.349590  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:11:17.368475  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:11:17.385758  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:11:17.402970  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:11:17.421060  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:11:17.441923  489065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:11:17.461692  489065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
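The apiserver certificate copied above was generated with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2] from the crypto.go line earlier; a sketch to confirm:

	docker exec addons-463201 sh -c \
	  'openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"'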
	I1206 10:11:17.476056  489065 ssh_runner.go:195] Run: openssl version
	I1206 10:11:17.485202  489065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.492424  489065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:11:17.500103  489065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.504428  489065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.504552  489065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:11:17.545792  489065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:11:17.553772  489065 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
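b5213941.0 is the OpenSSL subject-hash name the system trust store looks up; the hash comes from the openssl x509 -hash call above. As a standalone sketch (run inside the node):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run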
	I1206 10:11:17.561273  489065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:11:17.564781  489065 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:11:17.564830  489065 kubeadm.go:401] StartCluster: {Name:addons-463201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-463201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:11:17.564903  489065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:11:17.564962  489065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:11:17.592859  489065 cri.go:89] found id: ""
	I1206 10:11:17.592939  489065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:11:17.601161  489065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:11:17.608969  489065 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:11:17.609052  489065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:11:17.616875  489065 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:11:17.616906  489065 kubeadm.go:158] found existing configuration files:
	
	I1206 10:11:17.616968  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:11:17.624853  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:11:17.624921  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:11:17.632342  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:11:17.640197  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:11:17.640276  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:11:17.648009  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:11:17.655795  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:11:17.655906  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:11:17.663588  489065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:11:17.671327  489065 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:11:17.671413  489065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:11:17.679013  489065 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:11:17.723926  489065 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 10:11:17.724247  489065 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:11:17.747604  489065 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:11:17.747757  489065 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:11:17.747837  489065 kubeadm.go:319] OS: Linux
	I1206 10:11:17.747931  489065 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:11:17.748026  489065 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:11:17.748133  489065 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:11:17.748228  489065 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:11:17.748324  489065 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:11:17.748414  489065 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:11:17.748494  489065 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:11:17.748575  489065 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:11:17.748657  489065 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:11:17.812048  489065 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:11:17.812163  489065 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:11:17.812259  489065 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:11:17.823524  489065 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:11:17.829584  489065 out.go:252]   - Generating certificates and keys ...
	I1206 10:11:17.829681  489065 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:11:17.829756  489065 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:11:18.449376  489065 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:11:19.180286  489065 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:11:19.859593  489065 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:11:20.215542  489065 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:11:21.447289  489065 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:11:21.447680  489065 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-463201 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:11:21.898827  489065 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:11:21.898965  489065 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-463201 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:11:22.064274  489065 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:11:22.401308  489065 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:11:23.085929  489065 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:11:23.086342  489065 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:11:23.241877  489065 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:11:23.442731  489065 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:11:24.923661  489065 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:11:25.866232  489065 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:11:26.487963  489065 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:11:26.489074  489065 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:11:26.492103  489065 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:11:26.495635  489065 out.go:252]   - Booting up control plane ...
	I1206 10:11:26.495745  489065 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:11:26.495824  489065 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:11:26.497061  489065 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:11:26.518357  489065 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:11:26.518472  489065 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:11:26.526760  489065 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:11:26.526863  489065 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:11:26.526903  489065 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:11:26.656781  489065 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:11:26.656902  489065 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:11:27.657406  489065 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000907458s
	I1206 10:11:27.663522  489065 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 10:11:27.663631  489065 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 10:11:27.663721  489065 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 10:11:27.663805  489065 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 10:11:30.783893  489065 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.120820254s
	I1206 10:11:32.539254  489065 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.876665656s
	I1206 10:11:34.164406  489065 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501524263s
	I1206 10:11:34.198095  489065 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 10:11:34.214027  489065 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 10:11:34.232206  489065 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 10:11:34.232412  489065 kubeadm.go:319] [mark-control-plane] Marking the node addons-463201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 10:11:34.247769  489065 kubeadm.go:319] [bootstrap-token] Using token: 1a4eby.57hnzzmqzzg8bz87
	I1206 10:11:34.250765  489065 out.go:252]   - Configuring RBAC rules ...
	I1206 10:11:34.250903  489065 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 10:11:34.261120  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 10:11:34.273801  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 10:11:34.278959  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 10:11:34.286250  489065 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 10:11:34.294158  489065 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 10:11:34.571859  489065 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 10:11:35.016721  489065 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 10:11:35.571386  489065 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 10:11:35.572495  489065 kubeadm.go:319] 
	I1206 10:11:35.572593  489065 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 10:11:35.572607  489065 kubeadm.go:319] 
	I1206 10:11:35.572709  489065 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 10:11:35.572715  489065 kubeadm.go:319] 
	I1206 10:11:35.572765  489065 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 10:11:35.572849  489065 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 10:11:35.572906  489065 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 10:11:35.572916  489065 kubeadm.go:319] 
	I1206 10:11:35.572985  489065 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 10:11:35.572995  489065 kubeadm.go:319] 
	I1206 10:11:35.573054  489065 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 10:11:35.573065  489065 kubeadm.go:319] 
	I1206 10:11:35.573135  489065 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 10:11:35.573223  489065 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 10:11:35.573297  489065 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 10:11:35.573309  489065 kubeadm.go:319] 
	I1206 10:11:35.573398  489065 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 10:11:35.573477  489065 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 10:11:35.573485  489065 kubeadm.go:319] 
	I1206 10:11:35.573569  489065 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1a4eby.57hnzzmqzzg8bz87 \
	I1206 10:11:35.573676  489065 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:17404bed1e42f06c637e5edf0fb99872362c7da5e3019dba692c7ce2802c61f1 \
	I1206 10:11:35.573701  489065 kubeadm.go:319] 	--control-plane 
	I1206 10:11:35.573709  489065 kubeadm.go:319] 
	I1206 10:11:35.573794  489065 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 10:11:35.573801  489065 kubeadm.go:319] 
	I1206 10:11:35.573884  489065 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1a4eby.57hnzzmqzzg8bz87 \
	I1206 10:11:35.573990  489065 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:17404bed1e42f06c637e5edf0fb99872362c7da5e3019dba692c7ce2802c61f1 
	I1206 10:11:35.577452  489065 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 10:11:35.577682  489065 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:11:35.577792  489065 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:11:35.577815  489065 cni.go:84] Creating CNI manager for ""
	I1206 10:11:35.577823  489065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:11:35.581101  489065 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 10:11:35.583985  489065 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 10:11:35.588065  489065 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 10:11:35.588085  489065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 10:11:35.602024  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 10:11:35.899396  489065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:11:35.899543  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:35.899639  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-463201 minikube.k8s.io/updated_at=2025_12_06T10_11_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=addons-463201 minikube.k8s.io/primary=true
	I1206 10:11:36.017699  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:36.078889  489065 ops.go:34] apiserver oom_adj: -16
	I1206 10:11:36.518482  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:37.018675  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:37.518115  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:38.018200  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:38.518519  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:39.018468  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:39.518350  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:40.021979  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:40.518661  489065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:11:40.616346  489065 kubeadm.go:1114] duration metric: took 4.716863675s to wait for elevateKubeSystemPrivileges
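The repeated "kubectl get sa default" calls above are a poll: minikube retries until the default service account exists, the tail end of the elevateKubeSystemPrivileges step timed here. The same loop as a standalone sketch (hypothetical 60s budget, run inside the node):

	for i in $(seq 1 120); do
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	  sleep 0.5
	done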
	I1206 10:11:40.616380  489065 kubeadm.go:403] duration metric: took 23.051554111s to StartCluster
	I1206 10:11:40.616399  489065 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:40.616525  489065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:11:40.616971  489065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:11:40.617170  489065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:11:40.617315  489065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 10:11:40.617574  489065 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:11:40.617617  489065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
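The toEnable map above drives which addons the start path installs for this profile. The same toggles are exposed through the addons subcommand, so any entry can be flipped after the cluster is up; a sketch using the profile name from this run:

    # enable a single addon post-start and confirm its state (sketch)
    minikube -p addons-463201 addons enable metrics-server
    minikube -p addons-463201 addons list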
	I1206 10:11:40.617703  489065 addons.go:70] Setting yakd=true in profile "addons-463201"
	I1206 10:11:40.617721  489065 addons.go:239] Setting addon yakd=true in "addons-463201"
	I1206 10:11:40.617743  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.618243  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.618767  489065 addons.go:70] Setting inspektor-gadget=true in profile "addons-463201"
	I1206 10:11:40.618790  489065 addons.go:239] Setting addon inspektor-gadget=true in "addons-463201"
	I1206 10:11:40.618815  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.618938  489065 addons.go:70] Setting metrics-server=true in profile "addons-463201"
	I1206 10:11:40.618954  489065 addons.go:239] Setting addon metrics-server=true in "addons-463201"
	I1206 10:11:40.618974  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.619300  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.619404  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.619826  489065 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-463201"
	I1206 10:11:40.619849  489065 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-463201"
	I1206 10:11:40.619871  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.620269  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.621651  489065 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-463201"
	I1206 10:11:40.621685  489065 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-463201"
	I1206 10:11:40.621721  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.622193  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.627576  489065 addons.go:70] Setting cloud-spanner=true in profile "addons-463201"
	I1206 10:11:40.627610  489065 addons.go:239] Setting addon cloud-spanner=true in "addons-463201"
	I1206 10:11:40.627652  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.628108  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.633292  489065 addons.go:70] Setting registry=true in profile "addons-463201"
	I1206 10:11:40.633375  489065 addons.go:239] Setting addon registry=true in "addons-463201"
	I1206 10:11:40.633427  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.633931  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.637937  489065 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-463201"
	I1206 10:11:40.638003  489065 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-463201"
	I1206 10:11:40.638035  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.638509  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.662888  489065 addons.go:70] Setting default-storageclass=true in profile "addons-463201"
	I1206 10:11:40.662918  489065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-463201"
	I1206 10:11:40.663273  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.666072  489065 addons.go:70] Setting registry-creds=true in profile "addons-463201"
	I1206 10:11:40.666151  489065 addons.go:239] Setting addon registry-creds=true in "addons-463201"
	I1206 10:11:40.666216  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.666795  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.673569  489065 addons.go:70] Setting gcp-auth=true in profile "addons-463201"
	I1206 10:11:40.673602  489065 mustload.go:66] Loading cluster: addons-463201
	I1206 10:11:40.673821  489065 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:11:40.674083  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.677227  489065 addons.go:70] Setting ingress=true in profile "addons-463201"
	I1206 10:11:40.677252  489065 addons.go:239] Setting addon ingress=true in "addons-463201"
	I1206 10:11:40.677297  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.677750  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.688560  489065 addons.go:70] Setting storage-provisioner=true in profile "addons-463201"
	I1206 10:11:40.688589  489065 addons.go:239] Setting addon storage-provisioner=true in "addons-463201"
	I1206 10:11:40.688624  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.689100  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.692179  489065 addons.go:70] Setting ingress-dns=true in profile "addons-463201"
	I1206 10:11:40.692313  489065 addons.go:239] Setting addon ingress-dns=true in "addons-463201"
	I1206 10:11:40.692358  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.692822  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.712367  489065 out.go:179] * Verifying Kubernetes components...
	I1206 10:11:40.714619  489065 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-463201"
	I1206 10:11:40.714649  489065 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-463201"
	I1206 10:11:40.714992  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.719109  489065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:11:40.734182  489065 addons.go:70] Setting volcano=true in profile "addons-463201"
	I1206 10:11:40.734223  489065 addons.go:239] Setting addon volcano=true in "addons-463201"
	I1206 10:11:40.734258  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.734727  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.768480  489065 addons.go:70] Setting volumesnapshots=true in profile "addons-463201"
	I1206 10:11:40.768510  489065 addons.go:239] Setting addon volumesnapshots=true in "addons-463201"
	I1206 10:11:40.768554  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.769034  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:40.799247  489065 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 10:11:40.816872  489065 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1206 10:11:40.827367  489065 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 10:11:40.852397  489065 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 10:11:40.903642  489065 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:11:40.903729  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 10:11:40.903868  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.893288  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 10:11:40.914544  489065 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 10:11:40.914643  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.914969  489065 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 10:11:40.915093  489065 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 10:11:40.893354  489065 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:11:40.928346  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 10:11:40.928414  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.940153  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.965568  489065 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 10:11:40.902572  489065 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:11:40.966080  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 10:11:40.966167  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.975350  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 10:11:40.976501  489065 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 10:11:40.976616  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.975571  489065 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:11:40.975601  489065 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 10:11:40.976359  489065 addons.go:239] Setting addon default-storageclass=true in "addons-463201"
	W1206 10:11:40.978648  489065 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 10:11:40.979207  489065 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 10:11:40.979221  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 10:11:40.979273  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:40.983679  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 10:11:40.983776  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 10:11:40.985250  489065 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-463201"
	I1206 10:11:40.985285  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:40.985708  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:41.004764  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 10:11:41.004787  489065 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 10:11:41.004875  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.019117  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 10:11:41.019219  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.051339  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:41.051807  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:41.059966  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 10:11:41.060187  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 10:11:41.063641  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.064837  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.068633  489065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:11:41.068888  489065 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:11:41.068902  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 10:11:41.068961  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.082420  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 10:11:41.088348  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 10:11:41.094199  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 10:11:41.097173  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 10:11:41.098022  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 10:11:41.100000  489065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:11:41.100030  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:11:41.100110  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.122178  489065 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 10:11:41.144943  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1206 10:11:41.145124  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 10:11:41.152715  489065 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 10:11:41.153066  489065 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:11:41.153098  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 10:11:41.153197  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.159218  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 10:11:41.159302  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 10:11:41.159415  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.173658  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.178455  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.187998  489065 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 10:11:41.192724  489065 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 10:11:41.193183  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.197180  489065 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 10:11:41.197557  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 10:11:41.198041  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.201406  489065 out.go:179]   - Using image docker.io/busybox:stable
	I1206 10:11:41.205483  489065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:11:41.205556  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 10:11:41.205676  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.239201  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.262921  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.264938  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.280840  489065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:11:41.280860  489065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:11:41.280922  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:41.306188  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.318376  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.319169  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	W1206 10:11:41.324002  489065 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:11:41.324046  489065 retry.go:31] will retry after 144.072424ms: ssh: handshake failed: EOF
	I1206 10:11:41.334766  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.364070  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.364567  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	W1206 10:11:41.366862  489065 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:11:41.366886  489065 retry.go:31] will retry after 248.774351ms: ssh: handshake failed: EOF
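The two handshake EOF warnings above are transient: the SSH daemon in the node container is still coming up while many addon goroutines dial port 33168 at once, and sshutil retries after a short delay. A rough shell equivalent of that retry, with the key path taken from the log (the delays are illustrative, not minikube's exact schedule):

    # retry a flaky ssh dial with short backoff (sketch)
    for delay in 0.15 0.25 0.5; do
      ssh -p 33168 -i /home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa \
        docker@127.0.0.1 true && break
      sleep "$delay"
    done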
	I1206 10:11:41.372972  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:41.549943  489065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 10:11:41.550093  489065 ssh_runner.go:195] Run: sudo systemctl start kubelet
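The long pipeline above rewrites the CoreDNS Corefile in place: sed inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.49.1) ahead of the forward directive, adds a log directive before errors, and kubectl replace swaps in the modified ConfigMap. Once it completes a couple of seconds later, the injected record can be checked with something like (a sketch; the grep pattern is illustrative):

    kubectl -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'
    # expect: 192.168.49.1 host.minikube.internal / fallthrough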
	I1206 10:11:41.787545  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:11:41.831419  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 10:11:41.831448  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 10:11:41.848217  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:11:41.853783  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:11:41.869340  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:11:41.884658  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:11:41.907644  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 10:11:41.907669  489065 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 10:11:41.912284  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:11:41.924561  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 10:11:41.942415  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 10:11:41.942454  489065 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 10:11:41.966600  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:11:41.995664  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:11:41.998191  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 10:11:41.998221  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 10:11:42.053868  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:11:42.072519  489065 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 10:11:42.072561  489065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 10:11:42.076432  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 10:11:42.076464  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 10:11:42.280959  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 10:11:42.280988  489065 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 10:11:42.282181  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 10:11:42.282207  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 10:11:42.304170  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 10:11:42.304209  489065 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 10:11:42.346480  489065 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 10:11:42.346509  489065 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 10:11:42.352888  489065 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 10:11:42.352931  489065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 10:11:42.412861  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 10:11:42.412903  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 10:11:42.489366  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 10:11:42.489398  489065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 10:11:42.491044  489065 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:11:42.491086  489065 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 10:11:42.499504  489065 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:11:42.499538  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 10:11:42.579531  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:11:42.583773  489065 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:11:42.583795  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 10:11:42.685171  489065 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 10:11:42.685197  489065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 10:11:42.730677  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 10:11:42.730748  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 10:11:42.736431  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:11:42.739554  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:11:42.912637  489065 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 10:11:42.912730  489065 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 10:11:42.948082  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 10:11:42.948143  489065 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 10:11:43.118473  489065 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:11:43.118570  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 10:11:43.130402  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 10:11:43.130467  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 10:11:43.326561  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 10:11:43.326630  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 10:11:43.336584  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:11:43.575107  489065 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.024969498s)
	I1206 10:11:43.575210  489065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.025244325s)
	I1206 10:11:43.575373  489065 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 10:11:43.576015  489065 node_ready.go:35] waiting up to 6m0s for node "addons-463201" to be "Ready" ...
	I1206 10:11:43.579602  489065 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:11:43.579664  489065 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 10:11:43.832835  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:11:44.079560  489065 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-463201" context rescaled to 1 replicas
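The rescale above trims coredns from its default two replicas to one, which is enough for a single-node cluster and frees resources for the addons. The manual equivalent would be (sketch):

    kubectl -n kube-system scale deployment coredns --replicas=1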
	W1206 10:11:45.605128  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:46.818821  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031236956s)
	I1206 10:11:46.818894  489065 addons.go:495] Verifying addon ingress=true in "addons-463201"
	I1206 10:11:46.819112  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.970868664s)
	I1206 10:11:46.819267  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.965454968s)
	I1206 10:11:46.819305  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.949942615s)
	I1206 10:11:46.819367  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.934679644s)
	I1206 10:11:46.819411  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.907103997s)
	I1206 10:11:46.819430  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.894848141s)
	I1206 10:11:46.819469  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.852847309s)
	I1206 10:11:46.819502  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.823808408s)
	I1206 10:11:46.819516  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.765619438s)
	I1206 10:11:46.819569  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.240013928s)
	I1206 10:11:46.820094  489065 addons.go:495] Verifying addon metrics-server=true in "addons-463201"
	I1206 10:11:46.819591  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.083091223s)
	I1206 10:11:46.820112  489065 addons.go:495] Verifying addon registry=true in "addons-463201"
	I1206 10:11:46.819617  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.079999376s)
	I1206 10:11:46.819704  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.483049521s)
	W1206 10:11:46.821492  489065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 10:11:46.821522  489065 retry.go:31] will retry after 143.470615ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
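This failure is an ordering problem, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl apply batch as the CRD that defines its kind, so the REST mapping is not yet established on the first pass; the retry below (with apply --force) succeeds once the CRDs are registered. The conventional two-phase fix is to apply the CRDs first and wait for them to be established before applying the custom resources; a sketch against the same file layout:

    # apply the CRD, wait until it is established, then apply the CR
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml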
	I1206 10:11:46.822202  489065 out.go:179] * Verifying ingress addon...
	I1206 10:11:46.824424  489065 out.go:179] * Verifying registry addon...
	I1206 10:11:46.826387  489065 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-463201 service yakd-dashboard -n yakd-dashboard
	
	I1206 10:11:46.827319  489065 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 10:11:46.830339  489065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1206 10:11:46.846963  489065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
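The default-storageclass error above is a standard optimistic-concurrency 409: the local-path StorageClass (created in parallel by storage-provisioner-rancher) changed between the addon's read and its update, so the write must be retried against the latest resourceVersion. Patching sidesteps the conflict entirely; a sketch of the manual equivalent using the standard default-class annotation:

    # mark 'standard' default and 'local-path' non-default (sketch)
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'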
	I1206 10:11:46.848829  489065 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:11:46.848854  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:46.849259  489065 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 10:11:46.849277  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
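The kapi.go lines that follow are a poll loop: minikube lists pods by label selector and reports their phase until they leave Pending (the repeated "current state: Pending" entries below are successive iterations). An external equivalent for the ingress controller would be kubectl wait, though the selector needs narrowing, since app.kubernetes.io/name=ingress-nginx also matches the admission job pods, which complete rather than become Ready (a sketch):

    # wait for the ingress-nginx controller pod to become Ready (sketch)
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/component=controller --timeout=360s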
	I1206 10:11:46.966115  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:11:47.192012  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.35911936s)
	I1206 10:11:47.192046  489065 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-463201"
	I1206 10:11:47.195149  489065 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 10:11:47.198420  489065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 10:11:47.206795  489065 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:11:47.206824  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:47.333110  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:47.334302  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:47.702527  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:47.831241  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:47.833596  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1206 10:11:48.079179  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:48.202514  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:48.330757  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:48.333275  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:48.592467  489065 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 10:11:48.592555  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:48.609216  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:48.701952  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:48.719837  489065 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 10:11:48.733668  489065 addons.go:239] Setting addon gcp-auth=true in "addons-463201"
	I1206 10:11:48.733721  489065 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:11:48.734161  489065 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:11:48.750646  489065 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 10:11:48.750708  489065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:11:48.767537  489065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:11:48.830374  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:48.832582  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:49.202057  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:49.330994  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:49.333081  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:49.702365  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:49.801938  489065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.83577556s)
	I1206 10:11:49.802033  489065 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.051360258s)
	I1206 10:11:49.805291  489065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 10:11:49.808263  489065 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 10:11:49.811042  489065 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 10:11:49.811061  489065 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 10:11:49.824250  489065 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 10:11:49.824318  489065 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 10:11:49.830216  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:49.833748  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:49.840811  489065 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 10:11:49.840831  489065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 10:11:49.853849  489065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1206 10:11:50.079601  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:50.202415  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:50.348669  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:50.349338  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:50.377136  489065 addons.go:495] Verifying addon gcp-auth=true in "addons-463201"
	I1206 10:11:50.382357  489065 out.go:179] * Verifying gcp-auth addon...
	I1206 10:11:50.386697  489065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 10:11:50.453002  489065 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 10:11:50.453028  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:50.701918  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:50.831070  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:50.833497  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:50.890658  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:51.202080  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:51.331152  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:51.333033  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:51.389989  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:51.701288  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:51.830475  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:51.833040  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:51.889594  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:52.079753  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:52.202421  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:52.330374  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:52.332731  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:52.390301  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:52.702089  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:52.831069  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:52.832927  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:52.889908  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:53.202168  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:53.331005  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:53.332780  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:53.389721  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:53.702178  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:53.831985  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:53.834749  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:53.890261  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:54.201654  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:54.330695  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:54.332756  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:54.389673  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:54.579806  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:54.701441  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:54.830548  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:54.832866  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:54.890108  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:55.201647  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:55.330688  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:55.333938  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:55.389902  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:55.701957  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:55.830504  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:55.832968  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:55.889848  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:56.202578  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:56.330515  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:56.333039  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:56.390093  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:56.702200  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:56.830308  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:56.832819  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:56.889623  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:57.079448  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:57.201512  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:57.330462  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:57.332802  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:57.390474  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:57.702005  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:57.831239  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:57.833362  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:57.890220  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:58.202385  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:58.331202  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:58.333128  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:58.389805  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:58.702159  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:58.832077  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:58.833555  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:58.890373  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:11:59.202214  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:59.330306  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:59.334659  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:59.390215  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:11:59.579558  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:11:59.701817  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:11:59.831412  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:11:59.833465  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:11:59.890249  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:00.219146  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:00.350869  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:00.361224  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:00.390977  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:00.702063  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:00.831453  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:00.833810  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:00.890996  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:01.201830  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:01.331585  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:01.333759  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:01.390629  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:01.579623  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:01.702645  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:01.831705  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:01.833782  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:01.890107  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:02.201937  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:02.331109  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:02.333584  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:02.390329  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:02.702181  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:02.831092  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:02.833396  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:02.890397  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:03.202038  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:03.331350  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:03.333366  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:03.390236  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:03.702035  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:03.831242  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:03.833283  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:03.889980  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:04.079972  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:04.202583  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:04.330460  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:04.332804  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:04.389567  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:04.702334  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:04.830482  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:04.833221  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:04.890008  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:05.201679  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:05.330994  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:05.333564  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:05.390379  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:05.701662  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:05.830702  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:05.832689  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:05.889647  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:06.202213  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:06.330108  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:06.333726  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:06.390811  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:06.579525  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:06.701770  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:06.830852  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:06.832719  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:06.890402  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:07.202685  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:07.330979  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:07.333372  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:07.390258  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:07.702143  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:07.831269  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:07.833201  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:07.890030  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:08.202394  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:08.332072  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:08.333540  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:08.390631  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:08.701586  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:08.830251  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:08.833869  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:08.889824  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:09.079765  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:09.202251  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:09.334812  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:09.335063  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:09.389988  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:09.701715  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:09.830877  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:09.832935  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:09.890209  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:10.203110  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:10.331771  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:10.333069  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:10.390525  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:10.702257  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:10.830980  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:10.833108  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:10.889825  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:11.202243  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:11.331334  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:11.333308  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:11.390125  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:11.579429  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:11.701641  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:11.830769  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:11.832930  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:11.908458  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:12.201499  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:12.331069  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:12.333206  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:12.389970  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:12.701636  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:12.830897  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:12.832947  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:12.889943  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:13.202185  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:13.331222  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:13.333479  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:13.390189  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:13.702365  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:13.832649  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:13.833925  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:13.890003  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:14.079075  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:14.202861  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:14.331259  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:14.333254  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:14.390280  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:14.701945  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:14.831268  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:14.833616  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:14.890308  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:15.201966  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:15.331642  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:15.334163  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:15.390004  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:15.701820  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:15.831722  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:15.833770  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:15.890493  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:16.202387  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:16.332040  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:16.333097  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:16.390092  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:16.580085  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:16.701211  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:16.830208  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:16.833694  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:16.890416  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:17.202050  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:17.330625  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:17.333157  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:17.389864  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:17.701776  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:17.830990  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:17.834110  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:17.889693  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:18.201261  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:18.330427  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:18.332804  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:18.389774  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:18.701315  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:18.835337  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:18.835592  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:18.890206  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:19.078805  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:19.201707  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:19.330720  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:19.333157  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:19.389920  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:19.701856  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:19.830751  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:19.833009  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:19.889717  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:20.201686  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:20.330681  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:20.333186  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:20.389783  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:20.702164  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:20.832369  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:20.833461  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:20.890882  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 10:12:21.079673  489065 node_ready.go:57] node "addons-463201" has "Ready":"False" status (will retry)
	I1206 10:12:21.201563  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:21.330513  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:21.332954  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:21.457977  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:21.598913  489065 node_ready.go:49] node "addons-463201" is "Ready"
	I1206 10:12:21.598940  489065 node_ready.go:38] duration metric: took 38.022873011s for node "addons-463201" to be "Ready" ...
	I1206 10:12:21.598954  489065 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:12:21.599016  489065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:21.616887  489065 api_server.go:72] duration metric: took 40.999680017s to wait for apiserver process to appear ...
	I1206 10:12:21.616915  489065 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:12:21.616934  489065 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 10:12:21.670507  489065 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 10:12:21.677007  489065 api_server.go:141] control plane version: v1.34.2
	I1206 10:12:21.677044  489065 api_server.go:131] duration metric: took 60.121433ms to wait for apiserver health ...
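	(The two api_server.go steps above — pgrep for the kube-apiserver process, then polling https://192.168.49.2:8443/healthz until it returns 200 — follow a plain poll-until-healthy pattern. Below is a minimal illustrative sketch of that healthz wait, not minikube's actual implementation; the endpoint URL and rough timeout come from the log, while the function name, poll interval, and the skipped TLS verification are assumptions made for brevity.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 OK or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver cert is signed by the cluster CA; this bare probe
		// skips verification (an assumption, to keep the sketch short).
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200: apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval (assumed)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}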
	I1206 10:12:21.677054  489065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:12:21.717457  489065 system_pods.go:59] 19 kube-system pods found
	I1206 10:12:21.717500  489065 system_pods.go:61] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:21.717509  489065 system_pods.go:61] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending
	I1206 10:12:21.717517  489065 system_pods.go:61] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending
	I1206 10:12:21.717521  489065 system_pods.go:61] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending
	I1206 10:12:21.717525  489065 system_pods.go:61] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:21.717528  489065 system_pods.go:61] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:21.717532  489065 system_pods.go:61] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:21.717536  489065 system_pods.go:61] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:21.717539  489065 system_pods.go:61] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending
	I1206 10:12:21.717543  489065 system_pods.go:61] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:21.717547  489065 system_pods.go:61] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:21.717554  489065 system_pods.go:61] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:21.717558  489065 system_pods.go:61] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending
	I1206 10:12:21.717562  489065 system_pods.go:61] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending
	I1206 10:12:21.717566  489065 system_pods.go:61] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending
	I1206 10:12:21.717574  489065 system_pods.go:61] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending
	I1206 10:12:21.717578  489065 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending
	I1206 10:12:21.717581  489065 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending
	I1206 10:12:21.717585  489065 system_pods.go:61] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Pending
	I1206 10:12:21.717590  489065 system_pods.go:74] duration metric: took 40.531084ms to wait for pod list to return data ...
	I1206 10:12:21.717603  489065 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:12:21.723056  489065 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:12:21.723081  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:21.793176  489065 default_sa.go:45] found service account: "default"
	I1206 10:12:21.793205  489065 default_sa.go:55] duration metric: took 75.59451ms for default service account to be created ...
	I1206 10:12:21.793216  489065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:12:21.820378  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:21.820417  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:21.820427  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:21.820432  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending
	I1206 10:12:21.820436  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending
	I1206 10:12:21.820442  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:21.820447  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:21.820451  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:21.820455  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:21.820459  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending
	I1206 10:12:21.820469  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:21.820473  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:21.820482  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:21.820486  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending
	I1206 10:12:21.820506  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending
	I1206 10:12:21.820510  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending
	I1206 10:12:21.820513  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending
	I1206 10:12:21.820517  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending
	I1206 10:12:21.820521  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending
	I1206 10:12:21.820527  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Pending
	I1206 10:12:21.820550  489065 retry.go:31] will retry after 261.024902ms: missing components: kube-dns
	I1206 10:12:21.871814  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:21.872147  489065 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:12:21.872165  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:21.904284  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:22.102588  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:22.102631  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:22.102641  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:22.102648  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:22.102653  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending
	I1206 10:12:22.102657  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:22.102663  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:22.102667  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:22.102672  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:22.102678  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:22.102684  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:22.102692  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:22.102698  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:22.102708  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending
	I1206 10:12:22.102715  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:22.102720  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:22.102728  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending
	I1206 10:12:22.102733  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending
	I1206 10:12:22.102740  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending
	I1206 10:12:22.102751  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:12:22.102767  489065 retry.go:31] will retry after 358.310574ms: missing components: kube-dns
	I1206 10:12:22.220233  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:22.331343  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:22.336692  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:22.401371  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:22.471048  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:22.471197  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:22.471230  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:22.471253  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:22.471289  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:12:22.471314  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:22.471335  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:22.471371  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:22.471396  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:22.471417  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:22.471451  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:22.471475  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:22.471495  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:22.471529  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:12:22.471555  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:22.471575  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:22.471611  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:12:22.471638  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.471659  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.471694  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Running
	I1206 10:12:22.471733  489065 retry.go:31] will retry after 423.87765ms: missing components: kube-dns
	I1206 10:12:22.702646  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:22.831478  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:22.834247  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:22.890228  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:22.900914  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:22.900963  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:12:22.900973  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:22.900981  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:22.900988  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:12:22.900994  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:22.901000  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:22.901005  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:22.901010  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:22.901019  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:22.901024  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:22.901036  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:22.901042  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:22.901051  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:12:22.901065  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:22.901072  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:22.901083  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:12:22.901089  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.901096  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:22.901102  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Running
	I1206 10:12:22.901118  489065 retry.go:31] will retry after 550.206772ms: missing components: kube-dns
	I1206 10:12:23.203284  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:23.331972  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:23.334173  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:23.421220  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:23.470037  489065 system_pods.go:86] 19 kube-system pods found
	I1206 10:12:23.470067  489065 system_pods.go:89] "coredns-66bc5c9577-lpwwm" [7c7fd403-5d4d-464f-be21-f2adaba02970] Running
	I1206 10:12:23.470077  489065 system_pods.go:89] "csi-hostpath-attacher-0" [860f7570-9179-42b1-9516-27cbdd541cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:12:23.470084  489065 system_pods.go:89] "csi-hostpath-resizer-0" [d2a6d809-ef2c-42fa-8c97-cce95e85c55e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:12:23.470093  489065 system_pods.go:89] "csi-hostpathplugin-c44tb" [57f40587-ab37-420e-bdc7-f93eed11c211] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:12:23.470098  489065 system_pods.go:89] "etcd-addons-463201" [cdae2571-6b1e-4bf9-bc32-60fa5176e6a6] Running
	I1206 10:12:23.470102  489065 system_pods.go:89] "kindnet-f7fln" [7fc353f6-b054-4d10-bd16-a8a46177ef2f] Running
	I1206 10:12:23.470106  489065 system_pods.go:89] "kube-apiserver-addons-463201" [7a4ba8c7-b422-4056-a002-4acb74537151] Running
	I1206 10:12:23.470111  489065 system_pods.go:89] "kube-controller-manager-addons-463201" [001b2acc-5d95-4797-8460-891ff2f1a386] Running
	I1206 10:12:23.470117  489065 system_pods.go:89] "kube-ingress-dns-minikube" [365b0274-8935-4681-8e0e-74d4f8960974] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:12:23.470124  489065 system_pods.go:89] "kube-proxy-c7kr8" [68172874-485a-40ab-9e33-a3022f356326] Running
	I1206 10:12:23.470128  489065 system_pods.go:89] "kube-scheduler-addons-463201" [53ca2bb3-ad02-4622-bf46-a2ba0033429f] Running
	I1206 10:12:23.470137  489065 system_pods.go:89] "metrics-server-85b7d694d7-ghlgl" [e2a1839a-8335-4ab7-9136-7a3bb928ad38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:12:23.470143  489065 system_pods.go:89] "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:12:23.470159  489065 system_pods.go:89] "registry-6b586f9694-bq87w" [63389ffd-8b62-4e37-9aa3-c3b441f33313] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:12:23.470165  489065 system_pods.go:89] "registry-creds-764b6fb674-d82zs" [58e598f1-3fd2-4d98-a425-39e32abce39a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:12:23.470177  489065 system_pods.go:89] "registry-proxy-k4pz5" [ea8be87c-e08c-491b-94f0-c370941e4d8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:12:23.470184  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9lfs" [fbec39af-e2ab-44fc-b934-ef0e29d2b42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:23.470196  489065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9xc4" [6836f5df-cdbf-4d10-9f86-e21855d2d435] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:12:23.470200  489065 system_pods.go:89] "storage-provisioner" [34baa5b4-fa81-4ae5-a42a-f3f7cb366a32] Running
	I1206 10:12:23.470209  489065 system_pods.go:126] duration metric: took 1.67698694s to wait for k8s-apps to be running ...
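The system_pods.go lines above come from minikube enumerating kube-system pods and classifying each as Running, or Pending with a list of unready containers. A minimal client-go sketch of that classification follows; the kubeconfig path and the program itself are illustrative assumptions, not minikube's actual helper.

	// systempods_sketch.go: classify kube-system pods as Running or not,
	// roughly mirroring the system_pods.go log lines above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Printf("%q Running\n", p.Name)
				continue
			}
			// Collect containers that are not yet ready, as in the
			// "ContainersNotReady (containers with unready status: [...])" lines.
			var unready []string
			for _, cs := range p.Status.ContainerStatuses {
				if !cs.Ready {
					unready = append(unready, cs.Name)
				}
			}
			fmt.Printf("%q %s, unready containers: %v\n", p.Name, p.Status.Phase, unready)
		}
	}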
	I1206 10:12:23.470219  489065 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:12:23.470274  489065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:12:23.497214  489065 system_svc.go:56] duration metric: took 26.984932ms WaitForService to wait for kubelet
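The system_svc.go check above runs systemctl over SSH inside the node (via ssh_runner.go) and treats a zero exit code as "kubelet is running". A local sketch of the same probe, assuming the canonical `systemctl is-active --quiet kubelet` invocation; minikube executes its variant remotely with sudo.

	// kubelet_check_sketch.go: the systemctl liveness probe, run locally
	// for illustration (minikube runs it over SSH inside the node).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet` prints nothing and exits 0
		// iff the unit is active, so only the exit code matters.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}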
	I1206 10:12:23.497243  489065 kubeadm.go:587] duration metric: took 42.880039886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:12:23.497262  489065 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:12:23.500003  489065 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 10:12:23.500033  489065 node_conditions.go:123] node cpu capacity is 2
	I1206 10:12:23.500047  489065 node_conditions.go:105] duration metric: took 2.77967ms to run NodePressure ...
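The node_conditions.go lines read capacity figures (203034800Ki ephemeral storage, 2 CPUs) straight off the Node object and then verify there is no resource pressure. A sketch of both reads, assuming the default kubeconfig location:

	// node_conditions_sketch.go: read capacity and pressure conditions from
	// each node, as the node_conditions.go lines above do. Illustrative only.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
			// NodePressure verification: each pressure condition should be False.
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status != corev1.ConditionFalse {
						fmt.Printf("node %s under pressure: %s=%s\n", n.Name, c.Type, c.Status)
					}
				}
			}
		}
	}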
	I1206 10:12:23.500060  489065 start.go:242] waiting for startup goroutines ...
	I1206 10:12:23.703664  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:23.831412  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:23.834812  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:23.890271  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:24.202077  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:24.332651  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:24.335509  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:24.390697  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:24.702946  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:24.831018  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:24.833854  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:24.889644  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:25.203562  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:25.331624  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:25.334476  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:25.390395  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:25.702839  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:25.834466  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:25.839033  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:25.889969  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:26.203180  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:26.332675  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:26.334000  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:26.389710  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:26.702200  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:26.831582  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:26.834391  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:26.890417  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:27.210095  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:27.333956  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:27.335094  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:27.389867  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:27.702365  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:27.830569  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:27.833019  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:27.889735  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:28.202004  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:28.331523  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:28.333506  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:28.390603  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:28.702489  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:28.830580  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:28.833224  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:28.890218  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:29.203006  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:29.334059  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:29.334314  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:29.390276  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:29.702078  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:29.831905  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:29.833924  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:29.890022  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:30.202815  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:30.331892  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:30.334455  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:30.390266  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:30.701693  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:30.833408  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:30.833958  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:30.889771  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:31.202160  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:31.331891  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:31.333778  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:31.389614  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:31.702424  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:31.830896  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:31.833973  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:31.890702  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:32.202447  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:32.338674  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:32.339168  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:32.390837  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:32.702713  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:32.831493  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:32.834351  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:32.890707  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:33.203816  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:33.331431  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:33.334120  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:33.390096  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:33.702851  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:33.832304  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:33.834706  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:33.890439  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:34.203501  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:34.330987  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:34.333396  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:34.391939  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:34.706792  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:34.830754  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:34.833514  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:34.892120  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:35.224343  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:35.331834  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:35.334309  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:35.390702  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:35.702354  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:35.831078  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:35.833272  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:35.890505  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:36.202318  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:36.332531  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:36.334000  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:36.432838  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:36.703043  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:36.831301  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:36.834113  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:36.890555  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:37.205245  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:37.332244  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:37.334885  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:37.432996  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:37.703602  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:37.831765  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:37.833856  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:37.889738  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:38.204336  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:38.330317  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:38.333879  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:38.390772  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:38.703048  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:38.832589  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:38.834260  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:38.890777  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:39.202567  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:39.330686  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:39.333641  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:39.393196  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:39.702217  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:39.830270  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:39.833960  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:39.890136  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:40.203372  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:40.332323  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:40.334022  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:40.390971  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:40.702829  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:40.831116  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:40.833874  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:40.890167  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:41.202087  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:41.331490  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:41.333580  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:41.390629  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:41.702002  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:41.831358  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:41.833967  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:41.890008  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:42.202743  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:42.330715  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:42.333843  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:42.390715  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:42.702310  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:42.833198  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:42.834644  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:42.891619  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:43.203608  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:43.331840  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:43.335164  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:43.390632  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:43.704113  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:43.832001  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:43.834824  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:43.890579  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:44.203441  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:44.332596  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:44.334646  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:44.389964  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:44.703007  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:44.832588  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:44.834480  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:44.890833  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:45.204537  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:45.331792  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:45.335779  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:45.390443  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:45.702403  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:45.830893  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:45.833297  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:45.890566  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:46.204514  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:46.331284  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:46.334629  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:46.389511  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:46.702438  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:46.830773  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:46.833258  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:46.890168  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:47.202610  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:47.332960  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:47.334153  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:47.390181  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:47.702023  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:47.831594  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:47.834087  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:47.890646  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:48.202225  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:48.330701  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:48.333578  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:48.390180  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:48.703410  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:48.832490  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:48.838714  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:48.890921  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:49.202825  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:49.331603  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:49.334742  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:49.390083  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:49.705413  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:49.832187  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:49.837545  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:49.895603  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:50.214232  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:50.336965  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:50.337295  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:50.433023  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:50.705588  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:50.831406  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:50.837416  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:50.890759  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:51.203022  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:51.331284  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:51.333754  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:51.393236  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:51.710149  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:51.832532  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:51.833667  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:51.890633  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:52.203596  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:52.331051  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:52.333050  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:52.390126  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:52.703485  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:52.831477  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:52.833495  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:52.890715  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:53.202421  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:53.330588  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:53.333533  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:53.390542  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:53.703683  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:53.831330  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:53.833416  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:53.890658  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:54.202245  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:54.333622  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:54.334873  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:54.391077  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:54.702535  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:54.830543  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:54.833128  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:54.890011  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:55.202705  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:55.330828  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:55.333664  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:55.390293  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:55.702228  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:55.830428  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:55.833777  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:55.891291  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:56.201799  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:56.330851  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:56.333282  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:56.390242  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:56.702236  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:56.831609  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:56.834295  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:56.890404  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:57.202046  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:57.331073  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:57.333125  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:57.390222  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:57.701482  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:57.832748  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:57.834593  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:57.891606  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:58.202480  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:58.331496  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:58.333913  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:58.389823  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:58.710675  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:58.831401  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:58.833531  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:58.890454  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:59.202191  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:59.331177  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:59.333708  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:59.389592  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:12:59.701986  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:12:59.831433  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:12:59.834579  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:12:59.891265  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:00.244666  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:00.334977  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:00.337140  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:13:00.390512  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:00.713936  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:00.835053  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:00.835328  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:13:00.890905  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:01.202869  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:01.331454  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:01.333691  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:13:01.390094  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:01.701768  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:01.831198  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:01.833710  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:13:01.889638  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:02.202380  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:02.332397  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:02.334525  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:13:02.390294  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:02.702126  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:02.834668  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:02.838289  489065 kapi.go:107] duration metric: took 1m16.007948253s to wait for kubernetes.io/minikube-addons=registry ...
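The wall of kapi.go:96 lines above is one polling loop per label selector: list the matching pods roughly every half second, log the observed phase ("Pending: [<nil>]" while nothing is Running yet), and record the total duration once every match is Running, as the registry selector just did after 1m16s. A sketch of such a loop; the 500ms interval, 6m timeout, and function names are assumptions for illustration, not minikube's actual kapi implementation.

	// label_wait_sketch.go: poll pods matching a label selector until all
	// are Running, logging the current phase each tick.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls every 500ms (an assumed interval) until every pod
	// matching selector in ns is Running, logging the observed phase each tick.
	func waitForLabel(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
		start := time.Now()
		for {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			state := corev1.PodPending // what gets logged while nothing matches yet
			for _, p := range pods.Items {
				state = p.Status.Phase
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
				}
			}
			if allRunning {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForLabel(ctx, client, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}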
	I1206 10:13:02.931804  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:03.202686  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:03.330972  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:03.390029  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:03.702893  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:03.830882  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:03.889710  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:04.202399  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:04.330502  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:04.390513  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:04.702870  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:04.831690  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:04.890146  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:05.202058  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:05.331252  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:05.390132  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:05.703495  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:05.831219  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:05.890399  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:06.202234  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:06.331839  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:06.390130  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:13:06.703787  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:06.835650  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:06.891081  489065 kapi.go:107] duration metric: took 1m16.504383293s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 10:13:06.895727  489065 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-463201 cluster.
	I1206 10:13:06.898830  489065 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 10:13:06.901848  489065 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
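The gcp-auth messages above say the credential mount is opt-out per pod via a `gcp-auth-skip-secret` label. A sketch of creating such a pod with client-go; per the message only the label key matters, so the "true" value (and the pod/image names) are arbitrary placeholders.

	// gcp_auth_skip_sketch.go: create a pod labeled so the gcp-auth webhook
	// skips credential mounting. Only the label key is significant here.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds", // placeholder name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}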
	I1206 10:13:07.202130  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:07.331021  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:07.703612  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:07.831186  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:08.203390  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:08.330725  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:08.702966  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:08.830958  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:09.202501  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:09.330353  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:09.702898  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:09.836878  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:10.202258  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:10.330717  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:10.706056  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:10.831972  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:11.202695  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:11.338893  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:11.703410  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:11.830725  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:12.202094  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:12.330690  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:12.709486  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:12.833606  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:13.203250  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:13.337717  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:13.702664  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:13.830900  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:14.203102  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:14.331493  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:14.702981  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:14.831829  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:15.205633  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:15.330560  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:15.702595  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:15.831436  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:16.202006  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:16.330657  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:16.713026  489065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:13:16.831002  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:17.202518  489065 kapi.go:107] duration metric: took 1m30.004099333s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 10:13:17.330968  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 30 further identical "waiting for pod" poll lines, one roughly every 500ms from 10:13:17.8 to 10:13:31.8, elided ...]
	I1206 10:13:32.831693  489065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:13:33.331263  489065 kapi.go:107] duration metric: took 1m46.503942806s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 10:13:33.334870  489065 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, inspektor-gadget, registry-creds, cloud-spanner, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1206 10:13:33.337801  489065 addons.go:530] duration metric: took 1m52.720177597s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin ingress-dns inspektor-gadget registry-creds cloud-spanner storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1206 10:13:33.337866  489065 start.go:247] waiting for cluster config update ...
	I1206 10:13:33.337890  489065 start.go:256] writing updated cluster config ...
	I1206 10:13:33.338210  489065 ssh_runner.go:195] Run: rm -f paused
	I1206 10:13:33.343804  489065 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:13:33.347280  489065 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lpwwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.352431  489065 pod_ready.go:94] pod "coredns-66bc5c9577-lpwwm" is "Ready"
	I1206 10:13:33.352462  489065 pod_ready.go:86] duration metric: took 5.157489ms for pod "coredns-66bc5c9577-lpwwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.354582  489065 pod_ready.go:83] waiting for pod "etcd-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.359037  489065 pod_ready.go:94] pod "etcd-addons-463201" is "Ready"
	I1206 10:13:33.359064  489065 pod_ready.go:86] duration metric: took 4.454092ms for pod "etcd-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.361590  489065 pod_ready.go:83] waiting for pod "kube-apiserver-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.366289  489065 pod_ready.go:94] pod "kube-apiserver-addons-463201" is "Ready"
	I1206 10:13:33.366316  489065 pod_ready.go:86] duration metric: took 4.697985ms for pod "kube-apiserver-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.368942  489065 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.747791  489065 pod_ready.go:94] pod "kube-controller-manager-addons-463201" is "Ready"
	I1206 10:13:33.747820  489065 pod_ready.go:86] duration metric: took 378.848099ms for pod "kube-controller-manager-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:33.948413  489065 pod_ready.go:83] waiting for pod "kube-proxy-c7kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.347627  489065 pod_ready.go:94] pod "kube-proxy-c7kr8" is "Ready"
	I1206 10:13:34.347660  489065 pod_ready.go:86] duration metric: took 399.220119ms for pod "kube-proxy-c7kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.548201  489065 pod_ready.go:83] waiting for pod "kube-scheduler-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.952087  489065 pod_ready.go:94] pod "kube-scheduler-addons-463201" is "Ready"
	I1206 10:13:34.952114  489065 pod_ready.go:86] duration metric: took 403.883929ms for pod "kube-scheduler-addons-463201" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:13:34.952128  489065 pod_ready.go:40] duration metric: took 1.608291584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:13:35.066845  489065 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 10:13:35.072221  489065 out.go:179] * Done! kubectl is now configured to use "addons-463201" cluster and "default" namespace by default
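
	The two waits logged above poll label selectors until the matching pods report Ready: kapi.go:96 for the ingress-nginx controller, then pod_ready.go for the core kube-system components. Outside minikube's own loop, roughly the same check can be reproduced with kubectl (a sketch; the selectors come from the log, the timeouts are illustrative):

	    kubectl wait --namespace ingress-nginx --for=condition=Ready \
	      pod --selector=app.kubernetes.io/name=ingress-nginx --timeout=120s
	    kubectl wait --namespace kube-system --for=condition=Ready \
	      pod --selector=k8s-app=kube-dns --timeout=240s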
	
	
	==> CRI-O <==
	Dec 06 10:14:04 addons-463201 crio[829]: time="2025-12-06T10:14:04.867298201Z" level=info msg="Started container" PID=5333 containerID=d62004d45b9c5dfdf093615dd2d5997971550d8c6e3e045defe1d000efd9f4a8 description=default/test-local-path/busybox id=a84e8d5a-cecc-4722-9ef0-80ac56dcd96a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d12cd446706bc9e64d4165982afe7ee6a51fd577a4b8e121f1449cd8b54b057
	Dec 06 10:14:06 addons-463201 crio[829]: time="2025-12-06T10:14:06.065568066Z" level=info msg="Stopping pod sandbox: 2d12cd446706bc9e64d4165982afe7ee6a51fd577a4b8e121f1449cd8b54b057" id=a9b7da5d-2f03-4f1f-9886-ddf83b6d7f1f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:14:06 addons-463201 crio[829]: time="2025-12-06T10:14:06.065886592Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:2d12cd446706bc9e64d4165982afe7ee6a51fd577a4b8e121f1449cd8b54b057 UID:bef56095-296c-4cc1-ba41-e84bed9e7ba9 NetNS:/var/run/netns/9ba3860d-4614-42ce-b014-6e813efabb20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078ce0}] Aliases:map[]}"
	Dec 06 10:14:06 addons-463201 crio[829]: time="2025-12-06T10:14:06.066034528Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 06 10:14:06 addons-463201 crio[829]: time="2025-12-06T10:14:06.093467202Z" level=info msg="Stopped pod sandbox: 2d12cd446706bc9e64d4165982afe7ee6a51fd577a4b8e121f1449cd8b54b057" id=a9b7da5d-2f03-4f1f-9886-ddf83b6d7f1f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.861021563Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb/POD" id=a3a99776-6937-4e47-83af-05685245d40a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.861098042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.880177011Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb Namespace:local-path-storage ID:ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af UID:595bc830-1aa6-4774-9d1e-7bb5513f053d NetNS:/var/run/netns/698e6faa-36d9-424b-a975-020a20c565bb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000d16eb0}] Aliases:map[]}"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.880217043Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb to CNI network \"kindnet\" (type=ptp)"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.897241197Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb Namespace:local-path-storage ID:ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af UID:595bc830-1aa6-4774-9d1e-7bb5513f053d NetNS:/var/run/netns/698e6faa-36d9-424b-a975-020a20c565bb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000d16eb0}] Aliases:map[]}"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.897566886Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb for CNI network kindnet (type=ptp)"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.902678614Z" level=info msg="Ran pod sandbox ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af with infra container: local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb/POD" id=a3a99776-6937-4e47-83af-05685245d40a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.905404785Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=a3c7820d-d5e7-481d-a106-25a62187dedc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.911382877Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=bfd529ef-6caf-4417-92f4-7d2e21f4337a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.91972635Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb/helper-pod" id=5c99c7f6-7f7e-4185-8883-3a9c0f954aae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.919867344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.932300393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.93306783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.977526693Z" level=info msg="Created container 4de5835ab2a92b0548ee68281f8a059007e082fc702b964bafaa3234d21f2066: local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb/helper-pod" id=5c99c7f6-7f7e-4185-8883-3a9c0f954aae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.980033947Z" level=info msg="Starting container: 4de5835ab2a92b0548ee68281f8a059007e082fc702b964bafaa3234d21f2066" id=f7f9994a-240f-4478-b663-a6ccd4d702f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 10:14:07 addons-463201 crio[829]: time="2025-12-06T10:14:07.984282109Z" level=info msg="Started container" PID=5439 containerID=4de5835ab2a92b0548ee68281f8a059007e082fc702b964bafaa3234d21f2066 description=local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb/helper-pod id=f7f9994a-240f-4478-b663-a6ccd4d702f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af
	Dec 06 10:14:09 addons-463201 crio[829]: time="2025-12-06T10:14:09.086825466Z" level=info msg="Stopping pod sandbox: ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af" id=15e6be67-3ae7-4476-a5e2-e237ea222e83 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:14:09 addons-463201 crio[829]: time="2025-12-06T10:14:09.087147364Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb Namespace:local-path-storage ID:ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af UID:595bc830-1aa6-4774-9d1e-7bb5513f053d NetNS:/var/run/netns/698e6faa-36d9-424b-a975-020a20c565bb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000d17520}] Aliases:map[]}"
	Dec 06 10:14:09 addons-463201 crio[829]: time="2025-12-06T10:14:09.087294373Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb from CNI network \"kindnet\" (type=ptp)"
	Dec 06 10:14:09 addons-463201 crio[829]: time="2025-12-06T10:14:09.109582137Z" level=info msg="Stopped pod sandbox: ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af" id=15e6be67-3ae7-4476-a5e2-e237ea222e83 name=/runtime.v1.RuntimeService/StopPodSandbox
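
	The CRI-O excerpt above traces the full sandbox lifecycle for the LocalPath test (RunPodSandbox, image check, CreateContainer, StartContainer, StopPodSandbox, for both test-local-path and its delete helper pod). When debugging on the node, the same state can be queried over the CRI socket with crictl (a sketch, run inside the node via `minikube ssh`; the ID is a sandbox-ID prefix taken from the log):

	    sudo crictl pods --name test-local-path
	    sudo crictl inspectp 2d12cd446706b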
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	4de5835ab2a92       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   ec91ca9394e32       helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb   local-path-storage
	d62004d45b9c5       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   2d12cd446706b       test-local-path                                              default
	9bb1bb8348b08       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          31 seconds ago       Running             busybox                                  0                   65df4d7c4fb1b       busybox                                                      default
	7f68815caefeb       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             37 seconds ago       Running             controller                               0                   1d12a9b7b2865       ingress-nginx-controller-85d4c799dd-67nbw                    ingress-nginx
	5e82e51db5bae       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          53 seconds ago       Running             csi-snapshotter                          0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                                     kube-system
	e77ca233e9510       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          54 seconds ago       Running             csi-provisioner                          0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                                     kube-system
	c4503b391c863       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            56 seconds ago       Running             liveness-probe                           0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                                     kube-system
	d9cc152585cdf       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           57 seconds ago       Running             hostpath                                 0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                                     kube-system
	9744230520efe       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                58 seconds ago       Running             node-driver-registrar                    0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                                     kube-system
	144ac149fd224       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   92f1092a3f8df       gadget-9sgbv                                                 gadget
	eaab868690638       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago   Running             gcp-auth                                 0                   b5ce36d7cdaf6       gcp-auth-78565c9fb4-kwt2c                                    gcp-auth
	3d1cf1b6f7a39       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   4b45b7f7de145       ingress-nginx-admission-patch-7snvd                          ingress-nginx
	cb95c710b468f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   76a5926522969       ingress-nginx-admission-create-4jrk5                         ingress-nginx
	86fe541e9ba32       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   9287e4f872a57       registry-proxy-k4pz5                                         kube-system
	db3f01d09c58d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   0c44fb6d23380       snapshot-controller-7d9fbc56b8-c9xc4                         kube-system
	4c87ded8b1fe6       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   4b340daaafde7       nvidia-device-plugin-daemonset-wq978                         kube-system
	daa185ec097b3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   707fe7a3cbff0       snapshot-controller-7d9fbc56b8-b9lfs                         kube-system
	219b695817161       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   7e3333e66ee66       csi-hostpath-resizer-0                                       kube-system
	5d4b33e25d2b5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   1b55cb370f6cc       csi-hostpathplugin-c44tb                                     kube-system
	f89b62b37376a       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   8946717deff69       metrics-server-85b7d694d7-ghlgl                              kube-system
	5126713beb84f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   fa6cc860efd5c       cloud-spanner-emulator-5bdddb765-9s7q5                       default
	e83032278589e       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   548eede3fea9f       registry-6b586f9694-bq87w                                    kube-system
	d78b8a3ab8327       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   66a372752af25       yakd-dashboard-5ff678cb9-9l52n                               yakd-dashboard
	0f7b25b5f8b12       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   eb4d151ebc6a6       csi-hostpath-attacher-0                                      kube-system
	51c50d8be4bdb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   fa328292c2f0c       kube-ingress-dns-minikube                                    kube-system
	bf1d5142cb992       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   fabf8129d5e78       local-path-provisioner-648f6765c9-fbm4s                      local-path-storage
	775995b1bde62       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   16fe5abd3f7e9       coredns-66bc5c9577-lpwwm                                     kube-system
	c53340c2393c0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   a2f6d295744d6       storage-provisioner                                          kube-system
	d2c1eed3e4df1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   4e73a757c45a4       kube-proxy-c7kr8                                             kube-system
	bb2cff19695f3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   e894bb1e0bd7f       kindnet-f7fln                                                kube-system
	c1f6dd47829ed       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   8a203a58aa564       kube-controller-manager-addons-463201                        kube-system
	0ee8c78f93030       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   8e746d9526f4a       kube-scheduler-addons-463201                                 kube-system
	8372b3ca93930       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   00138c2a7b72d       kube-apiserver-addons-463201                                 kube-system
	5450d6d68764d       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   97639256bb112       etcd-addons-463201                                           kube-system
	
	
	==> coredns [775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06] <==
	[INFO] 10.244.0.9:38321 - 34585 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004528693s
	[INFO] 10.244.0.9:38321 - 63111 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000291343s
	[INFO] 10.244.0.9:38321 - 41811 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000298465s
	[INFO] 10.244.0.9:55409 - 23495 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175775s
	[INFO] 10.244.0.9:55409 - 23167 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000229937s
	[INFO] 10.244.0.9:33727 - 20593 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111768s
	[INFO] 10.244.0.9:33727 - 20413 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077783s
	[INFO] 10.244.0.9:51831 - 28968 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095875s
	[INFO] 10.244.0.9:51831 - 28779 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007885s
	[INFO] 10.244.0.9:41740 - 54034 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315694s
	[INFO] 10.244.0.9:41740 - 54503 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001403906s
	[INFO] 10.244.0.9:48397 - 5202 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114s
	[INFO] 10.244.0.9:48397 - 5063 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000214618s
	[INFO] 10.244.0.19:52086 - 38299 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159448s
	[INFO] 10.244.0.19:51876 - 57895 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201235s
	[INFO] 10.244.0.19:40805 - 63109 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127218s
	[INFO] 10.244.0.19:35678 - 24384 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115452s
	[INFO] 10.244.0.19:55474 - 1262 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000184899s
	[INFO] 10.244.0.19:51983 - 41203 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093455s
	[INFO] 10.244.0.19:53100 - 57107 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002144627s
	[INFO] 10.244.0.19:52116 - 58109 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001624785s
	[INFO] 10.244.0.19:35103 - 18677 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001753274s
	[INFO] 10.244.0.19:34432 - 27458 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001563074s
	[INFO] 10.244.0.23:54098 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000203033s
	[INFO] 10.244.0.23:51329 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129236s
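
	The NXDOMAIN bursts above are normal resolver behavior, not failures: with the default pod resolv.conf (ndots:5), each name is first expanded through the search path (the pod's namespace domain, svc.cluster.local, cluster.local, then the node's us-east-2.compute.internal suffix) before succeeding as written. A trailing dot marks a name fully qualified and skips the expansion, e.g. from any pod:

	    nslookup registry.kube-system.svc.cluster.local.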
	
	
	==> describe nodes <==
	Name:               addons-463201
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-463201
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=addons-463201
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_11_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-463201
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-463201"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:11:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-463201
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:14:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:14:08 +0000   Sat, 06 Dec 2025 10:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:14:08 +0000   Sat, 06 Dec 2025 10:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:14:08 +0000   Sat, 06 Dec 2025 10:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:14:08 +0000   Sat, 06 Dec 2025 10:12:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-463201
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                f3bf18dd-4afd-449b-b566-938b3500b5d7
	  Boot ID:                    e36fa5c9-4dd5-4964-a1e1-f5022a7b372f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     cloud-spanner-emulator-5bdddb765-9s7q5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-9sgbv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gcp-auth                    gcp-auth-78565c9fb4-kwt2c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-67nbw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m24s
	  kube-system                 coredns-66bc5c9577-lpwwm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpathplugin-c44tb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 etcd-addons-463201                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-f7fln                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-addons-463201                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-addons-463201        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-c7kr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-addons-463201                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 metrics-server-85b7d694d7-ghlgl              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m25s
	  kube-system                 nvidia-device-plugin-daemonset-wq978         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 registry-6b586f9694-bq87w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 registry-creds-764b6fb674-d82zs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 registry-proxy-k4pz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-7d9fbc56b8-b9lfs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 snapshot-controller-7d9fbc56b8-c9xc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  local-path-storage          local-path-provisioner-648f6765c9-fbm4s      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9l52n               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m28s  kube-proxy       
	  Normal   Starting                 2m36s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s  kubelet          Node addons-463201 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s  kubelet          Node addons-463201 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s  kubelet          Node addons-463201 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m31s  node-controller  Node addons-463201 event: Registered Node addons-463201 in Controller
	  Normal   NodeReady                109s   kubelet          Node addons-463201 status is now: NodeReady
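
	As a quick sanity check on the Allocated resources table above: cpu requests of 1050m against 2000m allocatable come to 52.5% (kubectl truncates to 52%), and memory requests of 638Mi against 8022300Ki (~7834Mi) allocatable come to about 8%, matching the printed percentages.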
	
	
	==> dmesg <==
	[Dec 6 08:13] hrtimer: interrupt took 10759856 ns
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e] <==
	{"level":"warn","ts":"2025-12-06T10:11:30.950790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:30.964016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:30.988955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.020817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.033667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.051248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.069284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.100077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.109487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.139468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.158028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.179741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.193191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.211224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.234233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.260494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.280287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.297014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:31.391248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:47.406551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:11:47.428517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.256224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.285325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.302662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:12:09.318090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [eaab8686906381577db71a1d73b4b44c4288542751b2a3715c084555f3e3b1ef] <==
	2025/12/06 10:13:06 GCP Auth Webhook started!
	2025/12/06 10:13:35 Ready to marshal response ...
	2025/12/06 10:13:35 Ready to write response ...
	2025/12/06 10:13:35 Ready to marshal response ...
	2025/12/06 10:13:35 Ready to write response ...
	2025/12/06 10:13:35 Ready to marshal response ...
	2025/12/06 10:13:35 Ready to write response ...
	2025/12/06 10:13:57 Ready to marshal response ...
	2025/12/06 10:13:57 Ready to write response ...
	2025/12/06 10:13:58 Ready to marshal response ...
	2025/12/06 10:13:58 Ready to write response ...
	2025/12/06 10:13:58 Ready to marshal response ...
	2025/12/06 10:13:58 Ready to write response ...
	2025/12/06 10:14:07 Ready to marshal response ...
	2025/12/06 10:14:07 Ready to write response ...
	2025/12/06 10:14:09 Ready to marshal response ...
	2025/12/06 10:14:09 Ready to write response ...
	
	
	==> kernel <==
	 10:14:10 up  2:56,  0 user,  load average: 1.79, 1.39, 1.94
	Linux addons-463201 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3] <==
	I1206 10:12:12.627239       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 10:12:12.627277       1 metrics.go:72] Registering metrics
	I1206 10:12:12.627345       1 controller.go:711] "Syncing nftables rules"
	I1206 10:12:21.129568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:12:21.129627       1 main.go:301] handling current node
	[... the same "Handling node with IPs"/"handling current node" pair repeats every 10s from 10:12:31 to 10:13:51, elided ...]
	I1206 10:14:01.126591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:14:01.126712       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b] <==
	W1206 10:12:21.430363       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.7.31:443: connect: connection refused
	E1206 10:12:21.430500       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.7.31:443: connect: connection refused" logger="UnhandledError"
	W1206 10:12:21.446396       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.7.31:443: connect: connection refused
	E1206 10:12:21.446437       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.7.31:443: connect: connection refused" logger="UnhandledError"
	W1206 10:12:21.494067       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.7.31:443: connect: connection refused
	E1206 10:12:21.494206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.7.31:443: connect: connection refused" logger="UnhandledError"
	W1206 10:12:46.487146       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:12:46.487201       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1206 10:12:46.487217       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 10:12:46.489295       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:12:46.489397       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 10:12:46.489419       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1206 10:13:00.724858       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.42:443: connect: connection refused" logger="UnhandledError"
	W1206 10:13:00.725451       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:13:00.725864       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 10:13:00.726469       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.42:443: connect: connection refused" logger="UnhandledError"
	I1206 10:13:00.855049       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 10:13:45.204107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47804: use of closed network connection
	E1206 10:13:45.498559       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47826: use of closed network connection
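
	The 503/connection-refused errors above are the apiserver failing to reach metrics-server while its aggregated APIService was still starting; the "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" entry at 10:13:00 appears to mark the point where it registered successfully. The aggregation state can be checked with (illustrative):

	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl get apiservice v1beta1.metrics.k8s.io \
	      -o jsonpath='{.status.conditions[?(@.type=="Available")]}'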
	
	
	==> kube-controller-manager [c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84] <==
	I1206 10:11:39.284333       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 10:11:39.287581       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 10:11:39.287681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 10:11:39.288330       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 10:11:39.288843       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 10:11:39.288919       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 10:11:39.288955       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 10:11:39.297515       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 10:11:39.297599       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 10:11:39.297631       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 10:11:39.297645       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 10:11:39.297653       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 10:11:39.304998       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 10:11:39.307912       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-463201" podCIDRs=["10.244.0.0/24"]
	E1206 10:11:45.695040       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 10:12:09.248579       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:12:09.248773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 10:12:09.248828       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 10:12:09.277253       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 10:12:09.291796       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 10:12:09.349917       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 10:12:09.393095       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:12:24.237702       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1206 10:12:39.354962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:12:39.402901       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3] <==
	I1206 10:11:41.451245       1 server_linux.go:53] "Using iptables proxy"
	I1206 10:11:41.544883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:11:41.646047       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:11:41.646113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 10:11:41.646236       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:11:41.691386       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 10:11:41.691442       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:11:41.700661       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:11:41.700967       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:11:41.700980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:11:41.703459       1 config.go:200] "Starting service config controller"
	I1206 10:11:41.703470       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:11:41.703489       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:11:41.703493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:11:41.703507       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:11:41.703511       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:11:41.704155       1 config.go:309] "Starting node config controller"
	I1206 10:11:41.704162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:11:41.704169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:11:41.804540       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:11:41.804604       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 10:11:41.804833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309] <==
	E1206 10:11:32.541395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 10:11:32.543862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:11:32.543984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:11:32.544114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:11:32.545599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:11:32.549291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:11:32.550081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:11:32.549551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:11:32.549595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 10:11:32.549641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:11:32.549697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:11:32.549740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:11:32.549783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:11:32.550232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:11:32.550283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:11:32.550321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:11:32.550362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:11:32.549497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:11:33.364917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:11:33.377457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:11:33.483227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:11:33.550504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:11:33.578757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 10:11:33.591413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1206 10:11:35.931539       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
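The burst of "Failed to watch ... is forbidden" errors above is normal scheduler start-up noise: the informers begin listing before the system:kube-scheduler RBAC bindings are reconciled, and the errors stop once the caches sync (last line). If they persisted, an illustrative way to probe the same permissions by hand, not part of the test run:

# Impersonate the user named in the errors and check a couple of the listed resources
kubectl --context addons-463201 auth can-i list pods --as system:kube-scheduler
kubectl --context addons-463201 auth can-i list csinodes.storage.k8s.io --as system:kube-scheduler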
	
	
	==> kubelet <==
	Dec 06 10:14:07 addons-463201 kubelet[1291]: I1206 10:14:07.075938    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d12cd446706bc9e64d4165982afe7ee6a51fd577a4b8e121f1449cd8b54b057"
	Dec 06 10:14:07 addons-463201 kubelet[1291]: I1206 10:14:07.624608    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h284w\" (UniqueName: \"kubernetes.io/projected/595bc830-1aa6-4774-9d1e-7bb5513f053d-kube-api-access-h284w\") pod \"helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") " pod="local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb"
	Dec 06 10:14:07 addons-463201 kubelet[1291]: I1206 10:14:07.625180    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-data\") pod \"helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") " pod="local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb"
	Dec 06 10:14:07 addons-463201 kubelet[1291]: I1206 10:14:07.625292    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-gcp-creds\") pod \"helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") " pod="local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb"
	Dec 06 10:14:07 addons-463201 kubelet[1291]: I1206 10:14:07.625384    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/595bc830-1aa6-4774-9d1e-7bb5513f053d-script\") pod \"helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") " pod="local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb"
	Dec 06 10:14:08 addons-463201 kubelet[1291]: I1206 10:14:08.942217    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef56095-296c-4cc1-ba41-e84bed9e7ba9" path="/var/lib/kubelet/pods/bef56095-296c-4cc1-ba41-e84bed9e7ba9/volumes"
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.154502    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-data\") pod \"595bc830-1aa6-4774-9d1e-7bb5513f053d\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") "
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.154554    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-gcp-creds\") pod \"595bc830-1aa6-4774-9d1e-7bb5513f053d\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") "
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.154591    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/595bc830-1aa6-4774-9d1e-7bb5513f053d-script\") pod \"595bc830-1aa6-4774-9d1e-7bb5513f053d\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") "
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.154617    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h284w\" (UniqueName: \"kubernetes.io/projected/595bc830-1aa6-4774-9d1e-7bb5513f053d-kube-api-access-h284w\") pod \"595bc830-1aa6-4774-9d1e-7bb5513f053d\" (UID: \"595bc830-1aa6-4774-9d1e-7bb5513f053d\") "
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.155116    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "595bc830-1aa6-4774-9d1e-7bb5513f053d" (UID: "595bc830-1aa6-4774-9d1e-7bb5513f053d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.155224    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-data" (OuterVolumeSpecName: "data") pod "595bc830-1aa6-4774-9d1e-7bb5513f053d" (UID: "595bc830-1aa6-4774-9d1e-7bb5513f053d"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.155506    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc830-1aa6-4774-9d1e-7bb5513f053d-script" (OuterVolumeSpecName: "script") pod "595bc830-1aa6-4774-9d1e-7bb5513f053d" (UID: "595bc830-1aa6-4774-9d1e-7bb5513f053d"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.161801    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595bc830-1aa6-4774-9d1e-7bb5513f053d-kube-api-access-h284w" (OuterVolumeSpecName: "kube-api-access-h284w") pod "595bc830-1aa6-4774-9d1e-7bb5513f053d" (UID: "595bc830-1aa6-4774-9d1e-7bb5513f053d"). InnerVolumeSpecName "kube-api-access-h284w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.256257    1291 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-data\") on node \"addons-463201\" DevicePath \"\""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.256898    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/595bc830-1aa6-4774-9d1e-7bb5513f053d-gcp-creds\") on node \"addons-463201\" DevicePath \"\""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.256956    1291 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/595bc830-1aa6-4774-9d1e-7bb5513f053d-script\") on node \"addons-463201\" DevicePath \"\""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.256969    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h284w\" (UniqueName: \"kubernetes.io/projected/595bc830-1aa6-4774-9d1e-7bb5513f053d-kube-api-access-h284w\") on node \"addons-463201\" DevicePath \"\""
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.665213    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzkx\" (UniqueName: \"kubernetes.io/projected/b6249c13-538c-4d22-be4c-45f82a0567d1-kube-api-access-vzzkx\") pod \"task-pv-pod\" (UID: \"b6249c13-538c-4d22-be4c-45f82a0567d1\") " pod="default/task-pv-pod"
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.665358    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e0b99f1b-be19-4059-83ee-50b42a328a7f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4d7ce827-d28c-11f0-a220-22bc87f09b72\") pod \"task-pv-pod\" (UID: \"b6249c13-538c-4d22-be4c-45f82a0567d1\") " pod="default/task-pv-pod"
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.665405    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b6249c13-538c-4d22-be4c-45f82a0567d1-gcp-creds\") pod \"task-pv-pod\" (UID: \"b6249c13-538c-4d22-be4c-45f82a0567d1\") " pod="default/task-pv-pod"
	Dec 06 10:14:09 addons-463201 kubelet[1291]: I1206 10:14:09.786175    1291 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-e0b99f1b-be19-4059-83ee-50b42a328a7f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4d7ce827-d28c-11f0-a220-22bc87f09b72\") pod \"task-pv-pod\" (UID: \"b6249c13-538c-4d22-be4c-45f82a0567d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/cbbc3f6647e433adecfdb84d033e22c6634400dc04b36aebedc0a8933c8267bb/globalmount\"" pod="default/task-pv-pod"
	Dec 06 10:14:09 addons-463201 kubelet[1291]: W1206 10:14:09.925954    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c07cf5a07d38c5c0b61d0eca204384ecbf549b9785b414eca3aabe03152971dd/crio-bcdfa778f63186a7239016182577913c0af38e6473ed0a862d85ea7e3396fd02 WatchSource:0}: Error finding container bcdfa778f63186a7239016182577913c0af38e6473ed0a862d85ea7e3396fd02: Status 404 returned error can't find the container with id bcdfa778f63186a7239016182577913c0af38e6473ed0a862d85ea7e3396fd02
	Dec 06 10:14:10 addons-463201 kubelet[1291]: I1206 10:14:10.095691    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec91ca9394e32613555edd2b94dbdbb5ea8b0e3d2d51cefc853a9edca0c859af"
	Dec 06 10:14:10 addons-463201 kubelet[1291]: E1206 10:14:10.095971    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb\" is forbidden: User \"system:node:addons-463201\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-463201' and this object" podUID="595bc830-1aa6-4774-9d1e-7bb5513f053d" pod="local-path-storage/helper-pod-delete-pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb"
	
	
	==> storage-provisioner [c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27] <==
	W1206 10:13:44.872313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:46.875841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:46.880059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:48.882908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:48.889638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:50.892557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:50.897521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:52.901131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:52.907974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:54.911001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:54.919827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:56.925984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:56.930850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:58.934705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:13:58.943884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:00.947487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:00.954426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:02.965628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:02.974982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:04.978942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:04.987454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:06.991077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:06.996278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:08.999355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:14:09.006463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
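The storage-provisioner log above is a steady stream of one client-go deprecation warning: v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. The warning is harmless here, and the replacement objects it points at can be inspected directly; an illustrative check, not part of the test run:

# List the EndpointSlice objects that supersede v1 Endpoints
kubectl --context addons-463201 get endpointslices.discovery.k8s.io -A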
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-463201 -n addons-463201
helpers_test.go:269: (dbg) Run:  kubectl --context addons-463201 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd registry-creds-764b6fb674-d82zs
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-463201 describe pod task-pv-pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd registry-creds-764b6fb674-d82zs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-463201 describe pod task-pv-pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd registry-creds-764b6fb674-d82zs: exit status 1 (129.253394ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-463201/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 10:14:09 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vzzkx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vzzkx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/task-pv-pod to addons-463201
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4jrk5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7snvd" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-d82zs" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-463201 describe pod task-pv-pod ingress-nginx-admission-create-4jrk5 ingress-nginx-admission-patch-7snvd registry-creds-764b6fb674-d82zs: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable headlamp --alsologtostderr -v=1: exit status 11 (311.837243ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:11.853263  496419 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:11.854211  496419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:11.854254  496419 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:11.854275  496419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:11.854586  496419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:11.854932  496419 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:11.855441  496419 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:11.855483  496419 addons.go:622] checking whether the cluster is paused
	I1206 10:14:11.855634  496419 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:11.855666  496419 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:11.856231  496419 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:11.880920  496419 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:11.880973  496419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:11.901596  496419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:12.017806  496419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:12.017916  496419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:12.049622  496419 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:12.049643  496419 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:12.049647  496419 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:12.049651  496419 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:12.049655  496419 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:12.049659  496419 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:12.049662  496419 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:12.049665  496419 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:12.049668  496419 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:12.049675  496419 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:12.049678  496419 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:12.049681  496419 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:12.049684  496419 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:12.049687  496419 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:12.049690  496419 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:12.049697  496419 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:12.049700  496419 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:12.049705  496419 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:12.049708  496419 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:12.049711  496419 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:12.049716  496419 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:12.049719  496419 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:12.049722  496419 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:12.049725  496419 cri.go:89] found id: ""
	I1206 10:14:12.049775  496419 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:12.073713  496419 out.go:203] 
	W1206 10:14:12.076768  496419 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:12.076885  496419 out.go:285] * 
	* 
	W1206 10:14:12.083868  496419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:12.087562  496419 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (4.14s)
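Every addon disable in this run fails the same way: minikube's paused-state check shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this node. On a crio cluster the OCI runtime state typically lives elsewhere (for crun, usually /run/crun), so the check never finds a state directory even though the containers are running. A minimal sketch of reproducing the check by hand against the profile above; the /run/crun location is an assumption about the runtime configured in this image:

# The check minikube runs (fails exactly as in the log above)
minikube -p addons-463201 ssh -- sudo runc list -f json
# The CRI itself lists the kube-system containers fine
minikube -p addons-463201 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# Assumed location of the runtime state on a crio+crun node
minikube -p addons-463201 ssh -- sudo ls /run/crun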

TestAddons/parallel/CloudSpanner (5.36s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-9s7q5" [3a0152cc-35b2-46c2-8fd6-686f715f0e8d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003412223s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (355.408643ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:07.665510  495741 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:07.666482  495741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:07.666530  495741 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:07.666554  495741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:07.666894  495741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:07.667314  495741 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:07.667748  495741 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:07.667787  495741 addons.go:622] checking whether the cluster is paused
	I1206 10:14:07.667914  495741 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:07.667946  495741 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:07.668582  495741 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:07.693272  495741 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:07.693331  495741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:07.722050  495741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:07.846423  495741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:07.846516  495741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:07.891774  495741 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:07.891794  495741 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:07.891798  495741 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:07.891802  495741 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:07.891805  495741 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:07.891808  495741 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:07.891811  495741 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:07.891814  495741 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:07.891817  495741 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:07.891823  495741 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:07.891826  495741 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:07.891829  495741 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:07.891832  495741 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:07.891835  495741 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:07.891838  495741 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:07.891842  495741 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:07.891845  495741 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:07.891849  495741 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:07.891852  495741 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:07.891855  495741 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:07.891859  495741 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:07.891862  495741 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:07.891865  495741 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:07.891868  495741 cri.go:89] found id: ""
	I1206 10:14:07.891918  495741 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:07.929880  495741 out.go:203] 
	W1206 10:14:07.933423  495741 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:07.933457  495741 out.go:285] * 
	* 
	W1206 10:14:07.941556  495741 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:07.944943  495741 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.36s)

TestAddons/parallel/LocalPath (9.56s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-463201 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-463201 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [bef56095-296c-4cc1-ba41-e84bed9e7ba9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [bef56095-296c-4cc1-ba41-e84bed9e7ba9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [bef56095-296c-4cc1-ba41-e84bed9e7ba9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00361785s
addons_test.go:967: (dbg) Run:  kubectl --context addons-463201 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 ssh "cat /opt/local-path-provisioner/pvc-17911fb5-afd9-46c4-b5c6-44f6c74e91bb_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-463201 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-463201 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (396.637443ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:14:07.676461  495745 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:14:07.677146  495745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:07.677160  495745 out.go:374] Setting ErrFile to fd 2...
	I1206 10:14:07.677166  495745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:14:07.677438  495745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:14:07.677735  495745 mustload.go:66] Loading cluster: addons-463201
	I1206 10:14:07.678147  495745 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:07.678158  495745 addons.go:622] checking whether the cluster is paused
	I1206 10:14:07.678268  495745 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:14:07.678279  495745 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:14:07.678765  495745 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:14:07.701406  495745 ssh_runner.go:195] Run: systemctl --version
	I1206 10:14:07.701469  495745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:14:07.738229  495745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:14:07.853422  495745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:14:07.853517  495745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:14:07.938721  495745 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:14:07.938743  495745 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:14:07.938748  495745 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:14:07.938752  495745 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:14:07.938755  495745 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:14:07.938759  495745 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:14:07.938762  495745 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:14:07.938767  495745 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:14:07.938771  495745 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:14:07.938776  495745 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:14:07.938779  495745 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:14:07.938787  495745 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:14:07.938796  495745 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:14:07.938800  495745 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:14:07.938803  495745 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:14:07.938808  495745 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:14:07.938816  495745 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:14:07.938822  495745 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:14:07.938825  495745 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:14:07.938829  495745 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:14:07.938833  495745 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:14:07.938836  495745 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:14:07.938851  495745 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:14:07.938860  495745 cri.go:89] found id: ""
	I1206 10:14:07.938909  495745 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:14:07.968880  495745 out.go:203] 
	W1206 10:14:07.972048  495745 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:14:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:14:07.972074  495745 out.go:285] * 
	* 
	W1206 10:14:07.978935  495745 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:14:07.985927  495745 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.56s)
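For reference, the LocalPath flow above boils down to binding a PVC against the rancher local-path provisioner and reading the provisioned file back off the node; only the final addon-disable step failed. A hand-run equivalent, assuming the addon installs a StorageClass named local-path (the real testdata manifests may differ):

kubectl --context addons-463201 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed name of the addon's StorageClass
  resources:
    requests:
      storage: 64Mi
EOF
# Once a pod writes into the volume, the file lands on the node under
# /opt/local-path-provisioner/<pv-name>_default_test-pvc/, as read back above
minikube -p addons-463201 ssh -- ls /opt/local-path-provisioner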

TestAddons/parallel/NvidiaDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-wq978" [1859f905-34d7-4140-9bb8-ef61dd8223d4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003312642s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (259.875204ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:13:58.229341  495315 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:13:58.230072  495315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:13:58.230087  495315 out.go:374] Setting ErrFile to fd 2...
	I1206 10:13:58.230095  495315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:13:58.230409  495315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:13:58.230756  495315 mustload.go:66] Loading cluster: addons-463201
	I1206 10:13:58.231252  495315 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:13:58.231275  495315 addons.go:622] checking whether the cluster is paused
	I1206 10:13:58.231429  495315 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:13:58.231449  495315 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:13:58.232022  495315 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:13:58.249989  495315 ssh_runner.go:195] Run: systemctl --version
	I1206 10:13:58.250043  495315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:13:58.270699  495315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:13:58.377706  495315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:13:58.377795  495315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:13:58.407867  495315 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:13:58.407892  495315 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:13:58.407897  495315 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:13:58.407901  495315 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:13:58.407905  495315 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:13:58.407909  495315 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:13:58.407913  495315 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:13:58.407916  495315 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:13:58.407919  495315 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:13:58.407928  495315 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:13:58.407932  495315 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:13:58.407935  495315 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:13:58.407938  495315 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:13:58.407941  495315 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:13:58.407945  495315 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:13:58.407952  495315 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:13:58.407958  495315 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:13:58.407966  495315 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:13:58.407969  495315 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:13:58.407972  495315 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:13:58.407987  495315 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:13:58.407990  495315 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:13:58.407993  495315 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:13:58.407996  495315 cri.go:89] found id: ""
	I1206 10:13:58.408050  495315 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:13:58.423161  495315 out.go:203] 
	W1206 10:13:58.426135  495315 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:13:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:13:58.426161  495315 out.go:285] * 
	* 
	W1206 10:13:58.432870  495315 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:13:58.436004  495315 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

TestAddons/parallel/Yakd (6.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9l52n" [833e86a3-33bc-4fc8-8c91-f422e63d2b43] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002950384s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-463201 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-463201 addons disable yakd --alsologtostderr -v=1: exit status 11 (271.983534ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:13:51.954468  495226 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:13:51.955256  495226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:13:51.955299  495226 out.go:374] Setting ErrFile to fd 2...
	I1206 10:13:51.955322  495226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:13:51.955603  495226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:13:51.955940  495226 mustload.go:66] Loading cluster: addons-463201
	I1206 10:13:51.956389  495226 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:13:51.956438  495226 addons.go:622] checking whether the cluster is paused
	I1206 10:13:51.956576  495226 config.go:182] Loaded profile config "addons-463201": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:13:51.956617  495226 host.go:66] Checking if "addons-463201" exists ...
	I1206 10:13:51.957216  495226 cli_runner.go:164] Run: docker container inspect addons-463201 --format={{.State.Status}}
	I1206 10:13:51.977934  495226 ssh_runner.go:195] Run: systemctl --version
	I1206 10:13:51.978006  495226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463201
	I1206 10:13:51.995908  495226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/addons-463201/id_rsa Username:docker}
	I1206 10:13:52.106985  495226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:13:52.107103  495226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:13:52.144415  495226 cri.go:89] found id: "5e82e51db5bae25edf1ad1f508561386faaf99fd749011f29643c994073fa82b"
	I1206 10:13:52.144438  495226 cri.go:89] found id: "e77ca233e95107b4650f789b058a9248eeafd977bfd0e3e94a8d154fb5be3203"
	I1206 10:13:52.144444  495226 cri.go:89] found id: "c4503b391c863e57c177f1e38dbf4df0e7bf25a9bfce9636b16547f1716b56f8"
	I1206 10:13:52.144448  495226 cri.go:89] found id: "d9cc152585cdfdae2e9fa73bb5a04625634fc98fdb53f8478f515311bf505603"
	I1206 10:13:52.144452  495226 cri.go:89] found id: "9744230520efec6c5427d659b020b8aebb0b8f60700b10a90e7568fb4f02feeb"
	I1206 10:13:52.144462  495226 cri.go:89] found id: "86fe541e9ba32ec7aff4e42066e190257fa61374b4e4a034d6e07bd405c91ae1"
	I1206 10:13:52.144466  495226 cri.go:89] found id: "db3f01d09c58d50b8b04347a9153def1aa334e50b6801cc14c91854950ca696a"
	I1206 10:13:52.144469  495226 cri.go:89] found id: "4c87ded8b1fe686fcfd044face4400839c48d991708ed4488d50c44e540be635"
	I1206 10:13:52.144472  495226 cri.go:89] found id: "daa185ec097b31f4e9c67362d59fdb4151fc734d34ff93626ef2bcc21dd41036"
	I1206 10:13:52.144479  495226 cri.go:89] found id: "219b6958171610a9c3b443e6cc8356719046bd89709701802b0a11664a7582b7"
	I1206 10:13:52.144483  495226 cri.go:89] found id: "5d4b33e25d2b558bac3cb545fadd5a94e7ab0be1221ebd35901d30915d5be267"
	I1206 10:13:52.144486  495226 cri.go:89] found id: "f89b62b37376ad66088e24fafa491c343778d9dd1380d9f7fdfdb93b4d59ba53"
	I1206 10:13:52.144489  495226 cri.go:89] found id: "e83032278589ef0f7ecb2c85a3dcc4c6b5b9568e74082e79289f5260ebb38645"
	I1206 10:13:52.144493  495226 cri.go:89] found id: "0f7b25b5f8b12a8b79f60d93bb10fb9ef6cda6c774739cae2d1ce2050af758c1"
	I1206 10:13:52.144499  495226 cri.go:89] found id: "51c50d8be4bdb4c0166e709b97f80c463fbbc2f2f018a946cf77419a29b72cda"
	I1206 10:13:52.144509  495226 cri.go:89] found id: "775995b1bde6256f0e91cb2ba08cf0f4b811366397f6c0515af6b9b8aa4bdd06"
	I1206 10:13:52.144515  495226 cri.go:89] found id: "c53340c2393c0b3642954671262ecfd1669b5cf00c3682409e1452a943becd27"
	I1206 10:13:52.144522  495226 cri.go:89] found id: "d2c1eed3e4df19803bddda45d1cc596ba92381d494b9bef49dc118075e0e83f3"
	I1206 10:13:52.144525  495226 cri.go:89] found id: "bb2cff19695f37ce069323cfab91760c1fe220c0a3edfc6d40f5233021eafcf3"
	I1206 10:13:52.144528  495226 cri.go:89] found id: "c1f6dd47829edac6b0e0c655e8eda525208d5a754d69d91b2b59d4a9d1200f84"
	I1206 10:13:52.144533  495226 cri.go:89] found id: "0ee8c78f93030e10d80b5a240b46a2f842e44c5e7f15b05425a8ff1f45bee309"
	I1206 10:13:52.144536  495226 cri.go:89] found id: "8372b3ca93930cefd069b1589642fa189999760e5a312f2852a05f1c57eef85b"
	I1206 10:13:52.144539  495226 cri.go:89] found id: "5450d6d68764d73b5b2dff2156681b12550ff54b9d5d6ed472c15683bbf31d5e"
	I1206 10:13:52.144542  495226 cri.go:89] found id: ""
	I1206 10:13:52.144593  495226 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:13:52.159933  495226 out.go:203] 
	W1206 10:13:52.162842  495226 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:13:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:13:52.162865  495226 out.go:285] * 
	* 
	W1206 10:13:52.169414  495226 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:13:52.172247  495226 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-463201 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 10:21:19.800701  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:23:35.939153  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:24:03.643035  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.259358  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.265830  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.277286  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.298766  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.340283  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.421749  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.583402  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:13.905193  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:14.547349  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:15.828859  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:18.390596  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:23.512763  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:33.754522  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:54.236171  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:35.198754  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:27:57.120111  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:28:35.939288  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.832108897s)

-- stdout --
	* [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - HTTP_PROXY=localhost:38257
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:38257 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-123579 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-123579 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000879355s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002027198s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
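The start dies in kubeadm's wait-control-plane phase: across both init attempts the kubelet never answers on http://127.0.0.1:10248/healthz. The stderr warnings point at the most likely cause on this host (Ubuntu 20.04, kernel 5.15.0-1084-aws, cgroups v1): kubelet v1.35 or newer refuses to run on cgroup v1 unless 'FailCgroupV1' is explicitly set to false. A first triage pass, using only the commands the output itself recommends:

	$ minikube -p functional-123579 ssh
	$ sudo systemctl status kubelet
	$ sudo journalctl -xeu kubelet | tail -n 50
	$ curl -sSL http://127.0.0.1:10248/healthz

If the journal confirms the cgroup v1 gate, the warning names the escape hatch as a KubeletConfiguration field. The fragment below is a sketch only; whether this minikube/kubeadm pipeline will carry it through is an assumption, and moving the host to cgroups v2 is the supported long-term fix:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false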
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
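The snapshot shows all proxy variables empty on the host because the test injects HTTP_PROXY=localhost:38257 only into the minikube child process; the "NO_PROXY environment does not include the minikube IP" warning in the stderr above is the expected side effect. For a real proxied environment the usual precaution (general minikube proxy guidance, not something verified against this run) is to exclude the node IP up front:

	$ export NO_PROXY=localhost,127.0.0.1,192.168.49.2
	$ minikube start ...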
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
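The inspect dump is what the harness itself parses; for example, the mapped SSH port can be read back with the same Go template that appears in the cli_runner lines earlier in this report:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-123579
	33183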
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 6 (336.476095ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:29:21.726415  522083 status.go:458] kubeconfig endpoint: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
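The exit-6 status is consistent with the aborted start: admin.conf was never merged, so the "functional-123579" entry is missing from the kubeconfig while the container itself still reports Running. The status output above already names the repair; as a sketch (it only becomes useful once a later start actually brings the API server up):

	$ minikube -p functional-123579 update-context
	$ kubectl config current-context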
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-137526 ssh sudo umount -f /mount-9p                                                                                                    │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount1                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount1 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount2 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount3 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ start          │ -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount1                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ start          │ -p functional-137526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                   │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ start          │ -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount2                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ dashboard      │ --url --port 36195 -p functional-137526 --alsologtostderr -v=1                                                                                    │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh            │ functional-137526 ssh findmnt -T /mount3                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ mount          │ -p functional-137526 --kill=true                                                                                                                  │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format short --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh            │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image          │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete         │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start          │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:21:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:21:00.585027  516527 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:21:00.585146  516527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:00.585150  516527 out.go:374] Setting ErrFile to fd 2...
	I1206 10:21:00.585154  516527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:00.585417  516527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:21:00.585801  516527 out.go:368] Setting JSON to false
	I1206 10:21:00.586596  516527 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11012,"bootTime":1765005449,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:21:00.586692  516527 start.go:143] virtualization:  
	I1206 10:21:00.588702  516527 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:21:00.589886  516527 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:21:00.590033  516527 notify.go:221] Checking for updates...
	I1206 10:21:00.592376  516527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:21:00.594334  516527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:21:00.595472  516527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:21:00.596572  516527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:21:00.597702  516527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:21:00.598980  516527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:21:00.622405  516527 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:21:00.622517  516527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:21:00.688190  516527 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:21:00.679368026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:21:00.688289  516527 docker.go:319] overlay module found
	I1206 10:21:00.689590  516527 out.go:179] * Using the docker driver based on user configuration
	I1206 10:21:00.690765  516527 start.go:309] selected driver: docker
	I1206 10:21:00.690774  516527 start.go:927] validating driver "docker" against <nil>
	I1206 10:21:00.690785  516527 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:21:00.691569  516527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:21:00.749340  516527 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:21:00.740441146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:21:00.749479  516527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:21:00.749693  516527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:21:00.750996  516527 out.go:179] * Using Docker driver with root privileges
	I1206 10:21:00.752210  516527 cni.go:84] Creating CNI manager for ""
	I1206 10:21:00.752274  516527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:21:00.752281  516527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:21:00.752367  516527 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:21:00.753724  516527 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:21:00.754640  516527 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:21:00.755877  516527 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:21:00.757084  516527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:21:00.757129  516527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:21:00.757136  516527 cache.go:65] Caching tarball of preloaded images
	I1206 10:21:00.757160  516527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:21:00.757222  516527 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:21:00.757231  516527 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:21:00.757582  516527 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:21:00.757601  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json: {Name:mkf54be6328b4b828358ee2cae36f55bf9a75b53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:00.776651  516527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:21:00.776662  516527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:21:00.776682  516527 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:21:00.776714  516527 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:21:00.776814  516527 start.go:364] duration metric: took 86.645µs to acquireMachinesLock for "functional-123579"
	I1206 10:21:00.776838  516527 start.go:93] Provisioning new machine with config: &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:21:00.776919  516527 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:21:00.778341  516527 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1206 10:21:00.778592  516527 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:38257 to docker env.
	I1206 10:21:00.778615  516527 start.go:159] libmachine.API.Create for "functional-123579" (driver="docker")
	I1206 10:21:00.778635  516527 client.go:173] LocalClient.Create starting
	I1206 10:21:00.778706  516527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem
	I1206 10:21:00.778742  516527 main.go:143] libmachine: Decoding PEM data...
	I1206 10:21:00.778758  516527 main.go:143] libmachine: Parsing certificate...
	I1206 10:21:00.778825  516527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem
	I1206 10:21:00.778841  516527 main.go:143] libmachine: Decoding PEM data...
	I1206 10:21:00.778851  516527 main.go:143] libmachine: Parsing certificate...
	I1206 10:21:00.779623  516527 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:21:00.796174  516527 cli_runner.go:211] docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:21:00.796253  516527 network_create.go:284] running [docker network inspect functional-123579] to gather additional debugging logs...
	I1206 10:21:00.796268  516527 cli_runner.go:164] Run: docker network inspect functional-123579
	W1206 10:21:00.812567  516527 cli_runner.go:211] docker network inspect functional-123579 returned with exit code 1
	I1206 10:21:00.812587  516527 network_create.go:287] error running [docker network inspect functional-123579]: docker network inspect functional-123579: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-123579 not found
	I1206 10:21:00.812600  516527 network_create.go:289] output of [docker network inspect functional-123579]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-123579 not found
	
	** /stderr **
	I1206 10:21:00.812700  516527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:21:00.830073  516527 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196d140}
	I1206 10:21:00.830104  516527 network_create.go:124] attempt to create docker network functional-123579 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:21:00.830162  516527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-123579 functional-123579
	I1206 10:21:00.890613  516527 network_create.go:108] docker network functional-123579 192.168.49.0/24 created
	I1206 10:21:00.890635  516527 kic.go:121] calculated static IP "192.168.49.2" for the "functional-123579" container
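	Note: the network step above can be spot-checked by hand; a minimal sketch using the names from this run (the expected output is what the log itself reports, not a fresh capture):
	  # inspect the bridge network minikube just created
	  docker network inspect functional-123579 --format '{{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	  # per the log above, this should print: 192.168.49.0/24 gateway=192.168.49.1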
	I1206 10:21:00.890720  516527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:21:00.907282  516527 cli_runner.go:164] Run: docker volume create functional-123579 --label name.minikube.sigs.k8s.io=functional-123579 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:21:00.923705  516527 oci.go:103] Successfully created a docker volume functional-123579
	I1206 10:21:00.923792  516527 cli_runner.go:164] Run: docker run --rm --name functional-123579-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-123579 --entrypoint /usr/bin/test -v functional-123579:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:21:01.493282  516527 oci.go:107] Successfully prepared a docker volume functional-123579
	I1206 10:21:01.493340  516527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:21:01.493348  516527 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:21:01.493437  516527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-123579:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:21:05.416985  516527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-123579:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.923513752s)
	I1206 10:21:05.417009  516527 kic.go:203] duration metric: took 3.923657339s to extract preloaded images to volume ...
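	Note: the extraction above is a generic "unpack into a named volume via a throwaway container" pattern; stripped of run-specific paths it reduces to the following sketch (the three variables are placeholders, not values from this run):
	  # unpack a preload tarball into a docker volume without touching the host filesystem
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
	    -v "$MACHINE_VOLUME:/extractDir" \
	    "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir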
	W1206 10:21:05.417152  516527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:21:05.417261  516527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:21:05.473997  516527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-123579 --name functional-123579 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-123579 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-123579 --network functional-123579 --ip 192.168.49.2 --volume functional-123579:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 10:21:05.786245  516527 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Running}}
	I1206 10:21:05.806587  516527 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:21:05.827257  516527 cli_runner.go:164] Run: docker exec functional-123579 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:21:05.896991  516527 oci.go:144] the created container "functional-123579" has a running status.
	I1206 10:21:05.897011  516527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa...
	I1206 10:21:06.242159  516527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:21:06.269079  516527 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:21:06.299181  516527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:21:06.299192  516527 kic_runner.go:114] Args: [docker exec --privileged functional-123579 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:21:06.351986  516527 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:21:06.376028  516527 machine.go:94] provisionDockerMachine start ...
	I1206 10:21:06.376123  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:06.397845  516527 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:06.398203  516527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:21:06.398209  516527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:21:06.398779  516527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34248->127.0.0.1:33183: read: connection reset by peer
	I1206 10:21:09.550796  516527 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:21:09.550812  516527 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:21:09.550877  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:09.568102  516527 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:09.568409  516527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:21:09.568417  516527 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:21:09.728593  516527 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:21:09.728674  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:09.746056  516527 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:09.746372  516527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:21:09.746386  516527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:21:09.899491  516527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:21:09.899506  516527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:21:09.899525  516527 ubuntu.go:190] setting up certificates
	I1206 10:21:09.899535  516527 provision.go:84] configureAuth start
	I1206 10:21:09.899598  516527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:21:09.916915  516527 provision.go:143] copyHostCerts
	I1206 10:21:09.916985  516527 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:21:09.916994  516527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:21:09.917075  516527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:21:09.917165  516527 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:21:09.917169  516527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:21:09.917194  516527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:21:09.917244  516527 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:21:09.917247  516527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:21:09.917269  516527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:21:09.917312  516527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:21:09.962918  516527 provision.go:177] copyRemoteCerts
	I1206 10:21:09.962970  516527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:21:09.963008  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:09.980203  516527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:21:10.102525  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:21:10.121556  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:21:10.141157  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:21:10.159928  516527 provision.go:87] duration metric: took 260.362608ms to configureAuth
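	Note: the SANs requested in the generate step above can be verified on the written certificate; a sketch assuming the store path from this run:
	  openssl x509 -noout -text -in /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
	  # should list 127.0.0.1, 192.168.49.2, functional-123579, localhost and minikube, matching the san=[...] above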
	I1206 10:21:10.159946  516527 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:21:10.160140  516527 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:21:10.160286  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:10.178042  516527 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:10.178359  516527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:21:10.178371  516527 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:21:10.475140  516527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:21:10.475154  516527 machine.go:97] duration metric: took 4.099114315s to provisionDockerMachine
	I1206 10:21:10.475163  516527 client.go:176] duration metric: took 9.696524292s to LocalClient.Create
	I1206 10:21:10.475176  516527 start.go:167] duration metric: took 9.696561903s to libmachine.API.Create "functional-123579"
	I1206 10:21:10.475199  516527 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:21:10.475210  516527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:21:10.475272  516527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:21:10.475312  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:10.492888  516527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:21:10.599446  516527 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:21:10.602762  516527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:21:10.602780  516527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:21:10.602790  516527 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:21:10.602846  516527 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:21:10.602932  516527 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:21:10.603009  516527 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:21:10.603070  516527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:21:10.610771  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:21:10.628379  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:21:10.647880  516527 start.go:296] duration metric: took 172.667062ms for postStartSetup
	I1206 10:21:10.648259  516527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:21:10.665564  516527 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:21:10.665831  516527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:21:10.665871  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:10.683160  516527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:21:10.788540  516527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:21:10.793295  516527 start.go:128] duration metric: took 10.016361733s to createHost
	I1206 10:21:10.793310  516527 start.go:83] releasing machines lock for "functional-123579", held for 10.01648886s
	I1206 10:21:10.793379  516527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:21:10.814043  516527 out.go:179] * Found network options:
	I1206 10:21:10.817773  516527 out.go:179]   - HTTP_PROXY=localhost:38257
	W1206 10:21:10.820809  516527 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1206 10:21:10.823739  516527 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1206 10:21:10.826607  516527 ssh_runner.go:195] Run: cat /version.json
	I1206 10:21:10.826650  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:10.826656  516527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:21:10.826712  516527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:21:10.847401  516527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:21:10.849394  516527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:21:11.055398  516527 ssh_runner.go:195] Run: systemctl --version
	I1206 10:21:11.063053  516527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:21:11.100136  516527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:21:11.104601  516527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:21:11.104669  516527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:21:11.133277  516527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:21:11.133291  516527 start.go:496] detecting cgroup driver to use...
	I1206 10:21:11.133324  516527 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:21:11.133381  516527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:21:11.151600  516527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:21:11.165162  516527 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:21:11.165217  516527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:21:11.183542  516527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:21:11.203729  516527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:21:11.327022  516527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:21:11.449520  516527 docker.go:234] disabling docker service ...
	I1206 10:21:11.449581  516527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:21:11.471406  516527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:21:11.484663  516527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:21:11.610486  516527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:21:11.733909  516527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:21:11.746776  516527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:21:11.762237  516527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:21:11.762294  516527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.771223  516527 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:21:11.771295  516527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.781141  516527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.790406  516527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.799492  516527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:21:11.808040  516527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.818310  516527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.832771  516527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:21:11.841576  516527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:21:11.849415  516527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:21:11.857026  516527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:21:11.972194  516527 ssh_runner.go:195] Run: sudo systemctl restart crio
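	Note: reconstructed from the sed edits above (not captured from the node), the drop-in that this restart picks up should now contain roughly:
	  # /etc/crio/crio.conf.d/02-crio.conf, reconstruction from the commands above
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]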
	I1206 10:21:12.174070  516527 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:21:12.174153  516527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:21:12.178018  516527 start.go:564] Will wait 60s for crictl version
	I1206 10:21:12.178075  516527 ssh_runner.go:195] Run: which crictl
	I1206 10:21:12.181581  516527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:21:12.211208  516527 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:21:12.211296  516527 ssh_runner.go:195] Run: crio --version
	I1206 10:21:12.243322  516527 ssh_runner.go:195] Run: crio --version
	I1206 10:21:12.276055  516527 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:21:12.278947  516527 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:21:12.295072  516527 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:21:12.299080  516527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:21:12.308803  516527 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:21:12.308917  516527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:21:12.308967  516527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:21:12.342180  516527 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:21:12.342193  516527 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:21:12.342248  516527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:21:12.367677  516527 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:21:12.367690  516527 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:21:12.367699  516527 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:21:12.367796  516527 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:21:12.367893  516527 ssh_runner.go:195] Run: crio config
	I1206 10:21:12.438054  516527 cni.go:84] Creating CNI manager for ""
	I1206 10:21:12.438065  516527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:21:12.438078  516527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:21:12.438099  516527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:21:12.438215  516527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:21:12.438289  516527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:21:12.446157  516527 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:21:12.446227  516527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:21:12.453806  516527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:21:12.466489  516527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:21:12.480229  516527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:21:12.493749  516527 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:21:12.497565  516527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
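	
	The /etc/hosts rewrite above is a strip-then-append idiom: grep -v drops any existing control-plane.minikube.internal line, the fresh mapping is echoed after it, and the result is copied back over /etc/hosts. A minimal generalized sketch of the same pattern, where IP and HOST are hypothetical placeholders rather than values from this run:

	    # Sketch: idempotently pin HOST to IP in /etc/hosts, mirroring the
	    # command in the log above. IP and HOST are placeholders.
	    IP=192.0.2.10 HOST=example.internal
	    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts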
	I1206 10:21:12.507918  516527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:21:12.628262  516527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:21:12.645982  516527 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:21:12.645993  516527 certs.go:195] generating shared ca certs ...
	I1206 10:21:12.646008  516527 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:12.646140  516527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:21:12.646183  516527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:21:12.646188  516527 certs.go:257] generating profile certs ...
	I1206 10:21:12.646241  516527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:21:12.646250  516527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt with IP's: []
	I1206 10:21:12.952495  516527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt ...
	I1206 10:21:12.952512  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: {Name:mk3a9b841c9add9844b05599d7ce20de5b214c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:12.952722  516527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key ...
	I1206 10:21:12.952729  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key: {Name:mk8624eaf33fc82e3480ab063ea6f9be26335f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:12.952830  516527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:21:12.952841  516527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt.fda7c087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:21:13.004842  516527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt.fda7c087 ...
	I1206 10:21:13.004856  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt.fda7c087: {Name:mk320f035df1035eb9571e298e957a7ed7e5ef3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:13.005049  516527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087 ...
	I1206 10:21:13.005058  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087: {Name:mkf5986664b0adcf7432ec93ccb0b9399544e489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:13.005131  516527 certs.go:382] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt.fda7c087 -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt
	I1206 10:21:13.005212  516527 certs.go:386] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087 -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key
	I1206 10:21:13.005264  516527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:21:13.005277  516527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt with IP's: []
	I1206 10:21:13.265293  516527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt ...
	I1206 10:21:13.265310  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt: {Name:mkd04fbcfc23e6fad1bd2ea0ba85824b8a52a32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:13.265505  516527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key ...
	I1206 10:21:13.265513  516527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key: {Name:mkb5784fa0c8b17f4827714699d6c2f402f9a68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:13.265715  516527 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:21:13.265757  516527 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:21:13.265765  516527 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:21:13.265791  516527 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:21:13.265815  516527 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:21:13.265838  516527 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:21:13.265881  516527 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:21:13.266416  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:21:13.286242  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:21:13.305129  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:21:13.323836  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:21:13.341305  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:21:13.358497  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:21:13.376007  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:21:13.393967  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:21:13.411769  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:21:13.431785  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:21:13.451837  516527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:21:13.470637  516527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:21:13.485271  516527 ssh_runner.go:195] Run: openssl version
	I1206 10:21:13.491715  516527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:21:13.498764  516527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:21:13.506186  516527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:21:13.510164  516527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:21:13.510226  516527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:21:13.551217  516527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:21:13.558639  516527 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/488068.pem /etc/ssl/certs/51391683.0
	I1206 10:21:13.566113  516527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:21:13.573591  516527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:21:13.581388  516527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:21:13.585110  516527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:21:13.585177  516527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:21:13.626721  516527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:21:13.634235  516527 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4880682.pem /etc/ssl/certs/3ec20f2e.0
	I1206 10:21:13.641403  516527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:13.648605  516527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:21:13.656054  516527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:13.659970  516527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:13.660030  516527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:13.701092  516527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:21:13.709052  516527 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
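	
	Each certificate is installed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash plus a .0 suffix (51391683.0, 3ec20f2e.0, b5213941.0 above), which is the hashed-directory layout OpenSSL uses for CA lookup. The same convention for a single certificate, with CERT as a hypothetical path:

	    # Sketch: link one CA cert into OpenSSL's hashed-directory layout,
	    # as the log does for 488068.pem, 4880682.pem and minikubeCA.pem.
	    CERT=/usr/share/ca-certificates/example-ca.pem   # placeholder
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"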
	I1206 10:21:13.716733  516527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:21:13.720473  516527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:21:13.720517  516527 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:21:13.720588  516527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:21:13.720648  516527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:21:13.746750  516527 cri.go:89] found id: ""
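	
	Before deciding how to bootstrap, minikube asks CRI-O for any existing kube-system containers via a crictl label filter; the empty ID list here is consistent with a node that has no cluster yet. The same query can be run by hand:

	    # Sketch: list kube-system pod containers through the CRI, exactly
	    # as the log's ssh_runner invocation does.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system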
	I1206 10:21:13.746808  516527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:21:13.754456  516527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:21:13.762441  516527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:21:13.762499  516527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:21:13.770582  516527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:21:13.770591  516527 kubeadm.go:158] found existing configuration files:
	
	I1206 10:21:13.770647  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:21:13.778774  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:21:13.778838  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:21:13.786452  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:21:13.794558  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:21:13.794624  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:21:13.802122  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:21:13.810273  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:21:13.810339  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:21:13.818810  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:21:13.826592  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:21:13.826658  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:21:13.834481  516527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:21:13.874854  516527 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:21:13.874966  516527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:21:13.945433  516527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:21:13.945497  516527 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:21:13.945560  516527 kubeadm.go:319] OS: Linux
	I1206 10:21:13.945625  516527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:21:13.945683  516527 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:21:13.945738  516527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:21:13.945797  516527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:21:13.945854  516527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:21:13.945902  516527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:21:13.945962  516527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:21:13.946022  516527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:21:13.946076  516527 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:21:14.018989  516527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:21:14.019149  516527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:21:14.019262  516527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:21:14.031563  516527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:21:14.037843  516527 out.go:252]   - Generating certificates and keys ...
	I1206 10:21:14.037944  516527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:21:14.038031  516527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:21:14.159004  516527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:21:14.648252  516527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:21:15.218692  516527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:21:15.315814  516527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:21:15.549227  516527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:21:15.549523  516527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-123579 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:21:15.902685  516527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:21:15.903031  516527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-123579 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:21:16.150311  516527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:21:16.391081  516527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:21:16.613176  516527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:21:16.613404  516527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:21:16.908880  516527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:21:17.174334  516527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:21:17.587053  516527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:21:17.834984  516527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:21:18.063032  516527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:21:18.063662  516527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:21:18.066435  516527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:21:18.070191  516527 out.go:252]   - Booting up control plane ...
	I1206 10:21:18.070289  516527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:21:18.070366  516527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:21:18.070431  516527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:21:18.087018  516527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:21:18.087160  516527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:21:18.094739  516527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:21:18.095026  516527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:21:18.095384  516527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:21:18.228484  516527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:21:18.228597  516527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:25:18.229028  516527 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000879355s
	I1206 10:25:18.229048  516527 kubeadm.go:319] 
	I1206 10:25:18.229104  516527 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:25:18.229136  516527 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:25:18.229247  516527 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:25:18.229250  516527 kubeadm.go:319] 
	I1206 10:25:18.229354  516527 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:25:18.229384  516527 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:25:18.229414  516527 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:25:18.229417  516527 kubeadm.go:319] 
	I1206 10:25:18.235859  516527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:25:18.236394  516527 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:25:18.236547  516527 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:25:18.236827  516527 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:25:18.236833  516527 kubeadm.go:319] 
	I1206 10:25:18.236934  516527 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
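	
	Per kubeadm's own message, the failing kubelet check is equivalent to a plain HTTP GET against the kubelet's local healthz endpoint, and the error text already names the two systemd follow-ups; together they are the quickest triage loop on a node stuck in this state:

	    # Sketch: reproduce kubeadm's kubelet health probe and run the
	    # follow-ups the error message itself recommends.
	    curl -sSL http://127.0.0.1:10248/healthz
	    systemctl status kubelet
	    journalctl -xeu kubelet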
	W1206 10:25:18.237087  516527 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-123579 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-123579 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000879355s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
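	
	Both attempts also print the cgroups v1 deprecation warning. It does not abort this run (SystemVerification is in the ignore list), but the message points at a kubelet configuration option for nodes that do trip on it. A sketch of the fragment it describes; the YAML field name failCgroupV1 is inferred from the warning's 'FailCgroupV1' wording, not confirmed against the v1.35 kubelet reference:

	    # Sketch only: opt kubelet v1.35+ back into cgroup v1, per the
	    # [WARNING SystemVerification] above. failCgroupV1 is an assumed
	    # field name; verify against the KubeletConfiguration docs.
	    cat <<'EOF' > kubelet-cgroupv1-fragment.yaml
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false
	    EOF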
	
	I1206 10:25:18.237206  516527 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:25:18.648253  516527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:25:18.661193  516527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:25:18.661250  516527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:25:18.669172  516527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:25:18.669180  516527 kubeadm.go:158] found existing configuration files:
	
	I1206 10:25:18.669230  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:25:18.676990  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:25:18.677050  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:25:18.684747  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:25:18.692819  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:25:18.692877  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:25:18.700419  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:25:18.708337  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:25:18.708400  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:25:18.716095  516527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:25:18.723826  516527 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:25:18.723894  516527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:25:18.731375  516527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:25:18.839291  516527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:25:18.839742  516527 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:25:18.907903  516527 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:29:20.935739  516527 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:29:20.935762  516527 kubeadm.go:319] 
	I1206 10:29:20.935835  516527 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:29:20.940560  516527 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:29:20.940625  516527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:29:20.940717  516527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:29:20.940785  516527 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:29:20.940825  516527 kubeadm.go:319] OS: Linux
	I1206 10:29:20.940871  516527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:29:20.940935  516527 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:29:20.940990  516527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:29:20.941040  516527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:29:20.941111  516527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:29:20.941163  516527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:29:20.941210  516527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:29:20.941261  516527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:29:20.941318  516527 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:29:20.941395  516527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:29:20.941496  516527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:29:20.941594  516527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:29:20.941660  516527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:29:20.944673  516527 out.go:252]   - Generating certificates and keys ...
	I1206 10:29:20.944765  516527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:29:20.944828  516527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:29:20.944935  516527 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:29:20.945017  516527 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:29:20.945113  516527 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:29:20.945179  516527 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:29:20.945254  516527 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:29:20.945333  516527 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:29:20.945413  516527 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:29:20.945486  516527 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:29:20.945523  516527 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:29:20.945578  516527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:29:20.945628  516527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:29:20.945685  516527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:29:20.945737  516527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:29:20.945800  516527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:29:20.945854  516527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:29:20.945938  516527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:29:20.946004  516527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:29:20.948773  516527 out.go:252]   - Booting up control plane ...
	I1206 10:29:20.948883  516527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:29:20.948967  516527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:29:20.949033  516527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:29:20.949137  516527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:29:20.949233  516527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:29:20.949337  516527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:29:20.949421  516527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:29:20.949460  516527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:29:20.949590  516527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:29:20.949694  516527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:29:20.949759  516527 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002027198s
	I1206 10:29:20.949761  516527 kubeadm.go:319] 
	I1206 10:29:20.949817  516527 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:29:20.949849  516527 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:29:20.949953  516527 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:29:20.949955  516527 kubeadm.go:319] 
	I1206 10:29:20.950060  516527 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:29:20.950092  516527 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:29:20.950122  516527 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:29:20.950188  516527 kubeadm.go:403] duration metric: took 8m7.229675199s to StartCluster
	I1206 10:29:20.950219  516527 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:29:20.950279  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:29:20.950370  516527 kubeadm.go:319] 
	I1206 10:29:20.981680  516527 cri.go:89] found id: ""
	I1206 10:29:20.981699  516527 logs.go:282] 0 containers: []
	W1206 10:29:20.981706  516527 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:29:20.981711  516527 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:29:20.981777  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:29:21.009916  516527 cri.go:89] found id: ""
	I1206 10:29:21.009930  516527 logs.go:282] 0 containers: []
	W1206 10:29:21.009937  516527 logs.go:284] No container was found matching "etcd"
	I1206 10:29:21.009943  516527 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:29:21.010008  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:29:21.035914  516527 cri.go:89] found id: ""
	I1206 10:29:21.035928  516527 logs.go:282] 0 containers: []
	W1206 10:29:21.035935  516527 logs.go:284] No container was found matching "coredns"
	I1206 10:29:21.035941  516527 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:29:21.036001  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:29:21.061361  516527 cri.go:89] found id: ""
	I1206 10:29:21.061382  516527 logs.go:282] 0 containers: []
	W1206 10:29:21.061389  516527 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:29:21.061394  516527 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:29:21.061452  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:29:21.090196  516527 cri.go:89] found id: ""
	I1206 10:29:21.090209  516527 logs.go:282] 0 containers: []
	W1206 10:29:21.090216  516527 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:29:21.090225  516527 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:29:21.090283  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:29:21.117601  516527 cri.go:89] found id: ""
	I1206 10:29:21.117615  516527 logs.go:282] 0 containers: []
	W1206 10:29:21.117622  516527 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:29:21.117627  516527 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:29:21.117682  516527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:29:21.141896  516527 cri.go:89] found id: ""
	I1206 10:29:21.141909  516527 logs.go:282] 0 containers: []
	W1206 10:29:21.141916  516527 logs.go:284] No container was found matching "kindnet"
	I1206 10:29:21.141925  516527 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:29:21.141935  516527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:29:21.209413  516527 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:29:21.200655    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.201375    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.203110    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.203782    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.205476    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:29:21.200655    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.201375    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.203110    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.203782    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:21.205476    4849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
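	
	The describe-nodes attempt fails for the expected reason: no apiserver ever came up, so nothing is listening on port 8441. That can be confirmed directly against the endpoint the kubeconfig points at:

	    # Sketch: show that nothing serves the apiserver port; connection
	    # refused here matches the kubectl errors above.
	    curl -k https://localhost:8441/healthz || true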
	I1206 10:29:21.209424  516527 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:29:21.209435  516527 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:29:21.241084  516527 logs.go:123] Gathering logs for container status ...
	I1206 10:29:21.241104  516527 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:29:21.268987  516527 logs.go:123] Gathering logs for kubelet ...
	I1206 10:29:21.269002  516527 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:29:21.335339  516527 logs.go:123] Gathering logs for dmesg ...
	I1206 10:29:21.335359  516527 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
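	
	Having given up, minikube assembles its diagnostics from four sources: the kubelet and CRI-O journals, CRI container status, and kernel messages. The same bundle can be collected by hand on the node:

	    # Sketch: gather the same diagnostics minikube collects above.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400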
	W1206 10:29:21.350480  516527 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002027198s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:29:21.350536  516527 out.go:285] * 
	W1206 10:29:21.350640  516527 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002027198s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:29:21.350690  516527 out.go:285] * 
	W1206 10:29:21.353178  516527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:21.359364  516527 out.go:203] 
	W1206 10:29:21.362230  516527 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002027198s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:29:21.362305  516527 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:29:21.362378  516527 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:29:21.365496  516527 out.go:203] 
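	# A minimal sketch of the two fixes referenced above, for this profile; the
	# --extra-config flag is quoted from the suggestion, while the failCgroupV1
	# YAML casing and placement are assumptions, not taken from this report:
	#   minikube start -p functional-123579 --extra-config=kubelet.cgroup-driver=systemd
	# or, per the SystemVerification warning, explicitly re-enable cgroup v1 for
	# the kubelet via a KubeletConfiguration patch:
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false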
	
	
	==> CRI-O <==
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168167153Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168328365Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168385479Z" level=info msg="Create NRI interface"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168537599Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.1685535Z" level=info msg="runtime interface created"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168567687Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168574554Z" level=info msg="runtime interface starting up..."
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168581143Z" level=info msg="starting plugins..."
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.168596117Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:21:12 functional-123579 crio[842]: time="2025-12-06T10:21:12.16865781Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:21:12 functional-123579 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.023054029Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d34590ff-7322-4bcd-bd45-2ab429e5ae29 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.023963598Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=81eb278d-1a56-4e5e-9a28-1302edb6d5c3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.024626756Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=47da1ec3-7621-4058-8c2f-7ae603431998 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.025176858Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d57f67e0-db8d-4495-a1e2-83a4c1a70b62 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.025724128Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eabaf582-39cd-4250-8a11-3f162a5d455e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.026277455Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=8963136d-5e15-41d4-9dc2-14f94b88b419 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:21:14 functional-123579 crio[842]: time="2025-12-06T10:21:14.026786622Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=815372c5-8ab1-4d21-9fa3-1b1ab1551886 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.910958855Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9a6c4cdb-d2f0-444c-8b9b-3240d0bb00bc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.911697415Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b41fc276-5954-457d-a984-a212888dcaf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.912218208Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=accea91d-9dbf-473c-b324-4bb8661b60a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.912687121Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=21f1a6ef-553c-48c2-92f4-b5e23260795d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.913188969Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e2e8cba2-e648-4d96-a893-a2cca4e42b24 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.913648881Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=00459455-f933-442e-8a24-9ed9fe1d84e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:25:18 functional-123579 crio[842]: time="2025-12-06T10:25:18.914090899Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3ec519f6-aac9-4719-9d4d-faff7077c87a name=/runtime.v1.ImageService/ImageStatus
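	# The ImageStatus checks above arrive over the CRI; the same image state can
	# be inspected by hand against the endpoint written to /etc/crictl.yaml
	# elsewhere in this report (a sketch, not captured output):
	#   sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images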
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:29:22.351564    4972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:22.352190    4972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:22.353829    4972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:22.354185    4972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:29:22.355464    4972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
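	# To confirm the refused endpoint is simply not being served (host and port
	# taken from the error above; commands are a sketch, not captured output):
	#   ss -ltn | grep 8441 || echo "nothing listening on :8441"
	#   curl -ksS https://localhost:8441/healthz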
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:29:22 up  3:11,  0 user,  load average: 0.23, 0.46, 1.13
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:29:19 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:29:20 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 06 10:29:20 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:29:20 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:29:20 functional-123579 kubelet[4779]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:29:20 functional-123579 kubelet[4779]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:29:20 functional-123579 kubelet[4779]: E1206 10:29:20.232207    4779 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:29:20 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:29:20 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:29:20 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 06 10:29:20 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:29:20 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:29:20 functional-123579 kubelet[4785]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:29:20 functional-123579 kubelet[4785]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:29:20 functional-123579 kubelet[4785]: E1206 10:29:20.992856    4785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:29:20 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:29:20 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:29:21 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 06 10:29:21 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:29:21 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:29:21 functional-123579 kubelet[4886]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:29:21 functional-123579 kubelet[4886]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:29:21 functional-123579 kubelet[4886]: E1206 10:29:21.719930    4886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:29:21 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:29:21 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
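	# The kubelet above exits because the host runs cgroup v1; a quick way to
	# verify the host's cgroup mode (a sketch, not captured output):
	#   stat -fc %T /sys/fs/cgroup    # cgroup2fs => v2, tmpfs => legacy v1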
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 6 (333.04042ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 10:29:22.810112  522302 status.go:458] kubeconfig endpoint: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.28s)
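A usage sketch for the stale-context warning in the status output above (profile name taken from this log; not captured output):

    minikube update-context -p functional-123579
    kubectl config current-context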

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 10:29:22.825199  488068 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-123579 --alsologtostderr -v=8
E1206 10:30:13.255340  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:30:40.961919  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:35.937628  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:34:59.004531  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:35:13.255266  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-123579 --alsologtostderr -v=8: exit status 80 (6m6.125418487s)

                                                
                                                
-- stdout --
	* [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:29:22.870980  522370 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:22.871170  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871181  522370 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:22.871187  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871464  522370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:29:22.871865  522370 out.go:368] Setting JSON to false
	I1206 10:29:22.872761  522370 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11514,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:29:22.872829  522370 start.go:143] virtualization:  
	I1206 10:29:22.876360  522370 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:29:22.880135  522370 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:29:22.880243  522370 notify.go:221] Checking for updates...
	I1206 10:29:22.885979  522370 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:29:22.888900  522370 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:22.891673  522370 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:29:22.894419  522370 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:29:22.897199  522370 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:29:22.900505  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:22.900663  522370 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:29:22.930035  522370 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:29:22.930154  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:22.994169  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:22.985097483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:22.994270  522370 docker.go:319] overlay module found
	I1206 10:29:22.997336  522370 out.go:179] * Using the docker driver based on existing profile
	I1206 10:29:23.000134  522370 start.go:309] selected driver: docker
	I1206 10:29:23.000177  522370 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.000290  522370 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:29:23.000407  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:23.064912  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:23.055716934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:23.065339  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:23.065406  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:23.065455  522370 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.068684  522370 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:29:23.071544  522370 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:29:23.074549  522370 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:29:23.077588  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:23.077638  522370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:29:23.077648  522370 cache.go:65] Caching tarball of preloaded images
	I1206 10:29:23.077715  522370 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:29:23.077742  522370 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:29:23.077753  522370 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:29:23.077861  522370 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:29:23.100973  522370 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:29:23.100996  522370 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:29:23.101011  522370 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:29:23.101047  522370 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:29:23.101106  522370 start.go:364] duration metric: took 36.569µs to acquireMachinesLock for "functional-123579"
	I1206 10:29:23.101131  522370 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:29:23.101140  522370 fix.go:54] fixHost starting: 
	I1206 10:29:23.101403  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:23.120661  522370 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:29:23.120697  522370 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:29:23.124123  522370 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:29:23.124169  522370 machine.go:94] provisionDockerMachine start ...
	I1206 10:29:23.124278  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.148209  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.148655  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.148670  522370 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:29:23.311217  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.311246  522370 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:29:23.311337  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.330615  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.330948  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.330967  522370 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:29:23.492326  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.492442  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.511425  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.511745  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.511767  522370 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:29:23.663802  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:29:23.663828  522370 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:29:23.663852  522370 ubuntu.go:190] setting up certificates
	I1206 10:29:23.663862  522370 provision.go:84] configureAuth start
	I1206 10:29:23.663938  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:23.683626  522370 provision.go:143] copyHostCerts
	I1206 10:29:23.683677  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683720  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:29:23.683732  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683811  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:29:23.683905  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683927  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:29:23.683935  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683965  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:29:23.684012  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684032  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:29:23.684040  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684065  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:29:23.684117  522370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:29:23.851072  522370 provision.go:177] copyRemoteCerts
	I1206 10:29:23.851167  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:29:23.851208  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.869258  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:23.976487  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:29:23.976551  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:29:23.994935  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:29:23.995001  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:29:24.028988  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:29:24.029065  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:29:24.047435  522370 provision.go:87] duration metric: took 383.548866ms to configureAuth
	I1206 10:29:24.047460  522370 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:29:24.047651  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:24.047753  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.065906  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:24.066279  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:24.066304  522370 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:29:24.394899  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:29:24.394922  522370 machine.go:97] duration metric: took 1.270744832s to provisionDockerMachine
	I1206 10:29:24.394933  522370 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:29:24.394946  522370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:29:24.395040  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:29:24.395089  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.413037  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.518950  522370 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:29:24.522167  522370 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:29:24.522190  522370 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:29:24.522196  522370 command_runner.go:130] > VERSION_ID="12"
	I1206 10:29:24.522201  522370 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:29:24.522206  522370 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:29:24.522219  522370 command_runner.go:130] > ID=debian
	I1206 10:29:24.522224  522370 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:29:24.522228  522370 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:29:24.522234  522370 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:29:24.522273  522370 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:29:24.522296  522370 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:29:24.522307  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:29:24.522366  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:29:24.522448  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:29:24.522465  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 10:29:24.522539  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:29:24.522547  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> /etc/test/nested/copy/488068/hosts
	I1206 10:29:24.522590  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:29:24.529941  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:24.547406  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:29:24.564885  522370 start.go:296] duration metric: took 169.937214ms for postStartSetup
	I1206 10:29:24.565009  522370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:29:24.565071  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.582051  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.684564  522370 command_runner.go:130] > 18%
	I1206 10:29:24.685308  522370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:29:24.690194  522370 command_runner.go:130] > 161G
	I1206 10:29:24.690863  522370 fix.go:56] duration metric: took 1.589719046s for fixHost
	I1206 10:29:24.690882  522370 start.go:83] releasing machines lock for "functional-123579", held for 1.589762361s
	I1206 10:29:24.690959  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:24.710139  522370 ssh_runner.go:195] Run: cat /version.json
	I1206 10:29:24.710198  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.710437  522370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:29:24.710491  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.744752  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.750995  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.850618  522370 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:29:24.850833  522370 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:24.941044  522370 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:29:24.943691  522370 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:29:24.943731  522370 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:29:24.943796  522370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:29:24.982406  522370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:29:24.986710  522370 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:29:24.986856  522370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:29:24.986921  522370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:29:24.995206  522370 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
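The find/mv pass renames any bridge or podman CNI configs to *.mk_disabled so that only the CNI minikube manages stays active; here nothing matched. A quoting-hardened restatement of the logged command:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;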
	I1206 10:29:24.995230  522370 start.go:496] detecting cgroup driver to use...
	I1206 10:29:24.995260  522370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:29:24.995314  522370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:29:25.015488  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:29:25.029388  522370 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:29:25.029474  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:29:25.044588  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:29:25.057886  522370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:29:25.175907  522370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:29:25.297406  522370 docker.go:234] disabling docker service ...
	I1206 10:29:25.297502  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:29:25.313940  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:29:25.326948  522370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:29:25.448237  522370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:29:25.592886  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
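Since this cluster uses CRI-O, the sequence above stops, disables, and masks cri-docker and docker so neither can claim the CRI socket again after a reboot. Condensed sketch of the same steps:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
        sudo systemctl stop -f "$unit"
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service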
	I1206 10:29:25.605716  522370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:29:25.618765  522370 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
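The tee above is the entire crictl client setup: a one-key /etc/crictl.yaml pointing crictl at the CRI-O socket. Equivalent standalone form:

    sudo tee /etc/crictl.yaml <<'EOF'
    runtime-endpoint: unix:///var/run/crio/crio.sock
    EOF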
	I1206 10:29:25.620045  522370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:29:25.620120  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.628683  522370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:29:25.628808  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.637855  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.646676  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.656251  522370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:29:25.664395  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.673385  522370 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.681859  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
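The chain of sed edits rewrites /etc/crio/crio.conf.d/02-crio.conf in place. Judging from this log and the merged config dumped later in this run, the resulting excerpt is approximately:

    # /etc/crio/crio.conf.d/02-crio.conf (excerpt after the edits above)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]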
	I1206 10:29:25.691317  522370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:29:25.697883  522370 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:29:25.698954  522370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
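Two kernel prerequisites are verified next: bridged traffic must traverse iptables (the sysctl reports 1 above) and IPv4 forwarding must be enabled so pod traffic can be routed off the node:

    sudo sysctl net.bridge.bridge-nf-call-iptables        # expect "... = 1"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'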
	I1206 10:29:25.706470  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:25.835287  522370 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:29:25.994073  522370 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:29:25.994183  522370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:29:25.998083  522370 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:29:25.998204  522370 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:29:25.998238  522370 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:29:25.998335  522370 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:25.998358  522370 command_runner.go:130] > Access: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998390  522370 command_runner.go:130] > Modify: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998420  522370 command_runner.go:130] > Change: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998437  522370 command_runner.go:130] >  Birth: -
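After the restart, readiness is simply the reappearance of the CRI-O socket; the stat succeeding (note the "socket" file type above) ends the 60s wait early. A minimal polling sketch under the same 60-second assumption:

    for i in $(seq 1 60); do
        stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
        sleep 1
    done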
	I1206 10:29:25.998473  522370 start.go:564] Will wait 60s for crictl version
	I1206 10:29:25.998553  522370 ssh_runner.go:195] Run: which crictl
	I1206 10:29:26.004847  522370 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:29:26.004981  522370 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:29:26.037391  522370 command_runner.go:130] > Version:  0.1.0
	I1206 10:29:26.037414  522370 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:29:26.037421  522370 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:29:26.037427  522370 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:29:26.037438  522370 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
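A note on the fields: "Version: 0.1.0" is the legacy kubelet runtime API version string the runtime reports back, not a crictl or CRI-O release number; the runtime's identity is carried by RuntimeName/RuntimeVersion (cri-o 1.34.3), and RuntimeApiVersion v1 is the CRI protocol version. The same fields can be queried ad hoc:

    sudo /usr/local/bin/crictl version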
	I1206 10:29:26.037548  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.065733  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.065769  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.065793  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.065805  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.065811  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.065822  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.065827  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.065832  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.065840  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.065845  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.065852  522370 command_runner.go:130] >      static
	I1206 10:29:26.065886  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.065897  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.065918  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.065928  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.065932  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.065941  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.065946  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.065954  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.065958  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.068082  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.095375  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.095453  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.095474  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.095491  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.095522  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.095561  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.095582  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.095622  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.095651  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.095669  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.095698  522370 command_runner.go:130] >      static
	I1206 10:29:26.095717  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.095735  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.095756  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.095787  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.095810  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.095867  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.095888  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.095910  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.095930  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.103062  522370 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:29:26.105990  522370 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:29:26.122102  522370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:29:26.125939  522370 command_runner.go:130] > 192.168.49.1	host.minikube.internal
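The grep confirms host.minikube.internal already resolves to the Docker network gateway (192.168.49.1) inside the node. A hedged sketch of the idempotent form of this step (only the grep appears in this log; the append branch is an assumption about the missing-entry case):

    grep -q 'host.minikube.internal$' /etc/hosts || \
        echo '192.168.49.1	host.minikube.internal' | sudo tee -a /etc/hosts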
	I1206 10:29:26.126304  522370 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:29:26.126416  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:26.126475  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.161627  522370 command_runner.go:130] > {
	I1206 10:29:26.161646  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.161650  522370 command_runner.go:130] >     {
	I1206 10:29:26.161662  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.161666  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161672  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.161676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161681  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161689  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.161697  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.161702  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161707  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.161711  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161719  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161729  522370 command_runner.go:130] >     },
	I1206 10:29:26.161732  522370 command_runner.go:130] >     {
	I1206 10:29:26.161739  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.161743  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161748  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.161751  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161757  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161765  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.161774  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.161777  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161781  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.161785  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161792  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161795  522370 command_runner.go:130] >     },
	I1206 10:29:26.161799  522370 command_runner.go:130] >     {
	I1206 10:29:26.161805  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.161810  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161815  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.161818  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161822  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161830  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.161838  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.161843  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161847  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.161851  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.161856  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161859  522370 command_runner.go:130] >     },
	I1206 10:29:26.161863  522370 command_runner.go:130] >     {
	I1206 10:29:26.161869  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.161873  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161878  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.161883  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161887  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161898  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.161905  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.161908  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161912  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.161916  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.161920  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.161923  522370 command_runner.go:130] >       },
	I1206 10:29:26.161935  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161939  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161942  522370 command_runner.go:130] >     },
	I1206 10:29:26.161946  522370 command_runner.go:130] >     {
	I1206 10:29:26.161953  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.161956  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161963  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.161966  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161970  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161978  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.161986  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.161990  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161994  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.161997  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162001  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162004  522370 command_runner.go:130] >       },
	I1206 10:29:26.162008  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162011  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162014  522370 command_runner.go:130] >     },
	I1206 10:29:26.162018  522370 command_runner.go:130] >     {
	I1206 10:29:26.162024  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.162028  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162033  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.162037  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162041  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162050  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.162067  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.162071  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162075  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.162081  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162091  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162094  522370 command_runner.go:130] >       },
	I1206 10:29:26.162098  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162102  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162105  522370 command_runner.go:130] >     },
	I1206 10:29:26.162115  522370 command_runner.go:130] >     {
	I1206 10:29:26.162123  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.162128  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162134  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.162137  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162143  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162154  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.162163  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.162166  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162170  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.162173  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162178  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162181  522370 command_runner.go:130] >     },
	I1206 10:29:26.162184  522370 command_runner.go:130] >     {
	I1206 10:29:26.162191  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.162194  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162200  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.162203  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162207  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162215  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.162232  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.162235  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162239  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.162243  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162250  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162253  522370 command_runner.go:130] >       },
	I1206 10:29:26.162257  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162260  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162263  522370 command_runner.go:130] >     },
	I1206 10:29:26.162267  522370 command_runner.go:130] >     {
	I1206 10:29:26.162273  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.162277  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162281  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.162284  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162288  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162296  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.162304  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.162307  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162311  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.162315  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162318  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.162321  522370 command_runner.go:130] >       },
	I1206 10:29:26.162325  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162329  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.162333  522370 command_runner.go:130] >     }
	I1206 10:29:26.162336  522370 command_runner.go:130] >   ]
	I1206 10:29:26.162339  522370 command_runner.go:130] > }
	I1206 10:29:26.164653  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.164677  522370 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:29:26.164733  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.190066  522370 command_runner.go:130] > {
	I1206 10:29:26.190096  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.190102  522370 command_runner.go:130] >     {
	I1206 10:29:26.190111  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.190116  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190122  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.190126  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190130  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190139  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.190147  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.190155  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190160  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.190164  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190168  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190171  522370 command_runner.go:130] >     },
	I1206 10:29:26.190174  522370 command_runner.go:130] >     {
	I1206 10:29:26.190181  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.190184  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190189  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.190193  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190197  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190205  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.190213  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.190216  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190220  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.190224  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190229  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190232  522370 command_runner.go:130] >     },
	I1206 10:29:26.190235  522370 command_runner.go:130] >     {
	I1206 10:29:26.190241  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.190245  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190250  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.190254  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190257  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190265  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.190273  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.190277  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190281  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.190285  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.190289  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190292  522370 command_runner.go:130] >     },
	I1206 10:29:26.190295  522370 command_runner.go:130] >     {
	I1206 10:29:26.190301  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.190308  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190313  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.190317  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190322  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190329  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.190336  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.190339  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190343  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.190346  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190350  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190353  522370 command_runner.go:130] >       },
	I1206 10:29:26.190364  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190369  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190372  522370 command_runner.go:130] >     },
	I1206 10:29:26.190374  522370 command_runner.go:130] >     {
	I1206 10:29:26.190381  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.190384  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190389  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.190392  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190396  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190403  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.190412  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.190415  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190419  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.190422  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190425  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190428  522370 command_runner.go:130] >       },
	I1206 10:29:26.190432  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190436  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190439  522370 command_runner.go:130] >     },
	I1206 10:29:26.190441  522370 command_runner.go:130] >     {
	I1206 10:29:26.190448  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.190452  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190460  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.190464  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190467  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190476  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.190484  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.190486  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190490  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.190493  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190497  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190500  522370 command_runner.go:130] >       },
	I1206 10:29:26.190504  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190507  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190514  522370 command_runner.go:130] >     },
	I1206 10:29:26.190517  522370 command_runner.go:130] >     {
	I1206 10:29:26.190524  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.190528  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190533  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.190536  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190540  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190547  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.190554  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.190557  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190561  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.190565  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190569  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190572  522370 command_runner.go:130] >     },
	I1206 10:29:26.190574  522370 command_runner.go:130] >     {
	I1206 10:29:26.190581  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.190584  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190590  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.190593  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190597  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190604  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.190628  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.190632  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190636  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.190639  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190643  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190646  522370 command_runner.go:130] >       },
	I1206 10:29:26.190650  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190653  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190656  522370 command_runner.go:130] >     },
	I1206 10:29:26.190659  522370 command_runner.go:130] >     {
	I1206 10:29:26.190665  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.190669  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190673  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.190676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190680  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190687  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.190694  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.190697  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190701  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.190705  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190709  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.190712  522370 command_runner.go:130] >       },
	I1206 10:29:26.190716  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190719  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.190722  522370 command_runner.go:130] >     }
	I1206 10:29:26.190724  522370 command_runner.go:130] >   ]
	I1206 10:29:26.190728  522370 command_runner.go:130] > }
	I1206 10:29:26.192099  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.192121  522370 cache_images.go:86] Images are preloaded, skipping loading
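The two identical crictl image dumps let minikube confirm that every image required for Kubernetes v1.35.0-beta.0 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, plus kindnetd and the storage provisioner) is already in CRI-O's store, so no preload tarball needs extracting. A compact way to eyeball the same inventory (jq usage is an assumption, not part of this run):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort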
	I1206 10:29:26.192130  522370 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:29:26.192245  522370 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
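The unit fragment above is the kubelet override minikube generates: the empty ExecStart= clears any inherited command line, and the second ExecStart pins the v1.35.0-beta.0 kubelet with node-specific flags (hostname override, node IP, bootstrap and final kubeconfigs). On the node it would typically land as a systemd drop-in and be activated like this (the path is an assumption; only the unit body appears in this log):

    # e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (assumed path)
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet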
	I1206 10:29:26.192338  522370 ssh_runner.go:195] Run: crio config
	I1206 10:29:26.220366  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.219989922Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:29:26.220411  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220176363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:29:26.220654  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22050187Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:29:26.220871  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220715248Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:29:26.221165  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22098899Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:26.221621  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.221432459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:29:26.238478  522370 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
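The stderr lines above (command_runner marks them with "!") show how CRI-O assembles its effective configuration: the base /etc/crio/crio.conf is absent, so the drop-ins under /etc/crio/crio.conf.d are merged in lexical order, and the dump that follows is that merged result:

    # load order reconstructed from the log lines above
    #   /etc/crio/crio.conf                  (skipped: not present)
    #   /etc/crio/crio.conf.d/02-crio.conf
    #   /etc/crio/crio.conf.d/10-crio.conf
    crio config    # prints the merged configuration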
	I1206 10:29:26.263608  522370 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:29:26.263638  522370 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:29:26.263647  522370 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:29:26.263651  522370 command_runner.go:130] > #
	I1206 10:29:26.263687  522370 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:29:26.263707  522370 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:29:26.263714  522370 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:29:26.263721  522370 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:29:26.263726  522370 command_runner.go:130] > # reload'.
	I1206 10:29:26.263732  522370 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:29:26.263756  522370 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:29:26.263778  522370 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:29:26.263789  522370 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:29:26.263793  522370 command_runner.go:130] > [crio]
	I1206 10:29:26.263802  522370 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:29:26.263811  522370 command_runner.go:130] > # containers images, in this directory.
	I1206 10:29:26.263826  522370 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:29:26.263848  522370 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:29:26.263868  522370 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:29:26.263877  522370 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:29:26.263885  522370 command_runner.go:130] > # imagestore = ""
	I1206 10:29:26.263894  522370 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:29:26.263901  522370 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:29:26.263908  522370 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:29:26.263914  522370 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:29:26.263920  522370 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:29:26.263936  522370 command_runner.go:130] > # storage_option = [
	I1206 10:29:26.263952  522370 command_runner.go:130] > # ]
	I1206 10:29:26.263965  522370 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:29:26.263972  522370 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:29:26.263985  522370 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:29:26.263995  522370 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:29:26.264002  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:29:26.264006  522370 command_runner.go:130] > # always happen on a node reboot
	I1206 10:29:26.264013  522370 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:29:26.264036  522370 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:29:26.264050  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:29:26.264055  522370 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:29:26.264060  522370 command_runner.go:130] > # version_file_persist = ""
	I1206 10:29:26.264078  522370 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:29:26.264092  522370 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:29:26.264096  522370 command_runner.go:130] > # internal_wipe = true
	I1206 10:29:26.264105  522370 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:29:26.264113  522370 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:29:26.264117  522370 command_runner.go:130] > # internal_repair = true
	I1206 10:29:26.264124  522370 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:29:26.264131  522370 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:29:26.264150  522370 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:29:26.264171  522370 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:29:26.264181  522370 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:29:26.264188  522370 command_runner.go:130] > [crio.api]
	I1206 10:29:26.264194  522370 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:29:26.264202  522370 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:29:26.264208  522370 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:29:26.264214  522370 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:29:26.264221  522370 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:29:26.264226  522370 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:29:26.264241  522370 command_runner.go:130] > # stream_port = "0"
	I1206 10:29:26.264256  522370 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:29:26.264261  522370 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:29:26.264279  522370 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:29:26.264295  522370 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:29:26.264302  522370 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:29:26.264317  522370 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264326  522370 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:29:26.264332  522370 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:29:26.264338  522370 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264355  522370 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:29:26.264373  522370 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:29:26.264389  522370 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:29:26.264395  522370 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:29:26.264399  522370 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:29:26.264435  522370 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264448  522370 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:29:26.264456  522370 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264460  522370 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:29:26.264467  522370 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:29:26.264476  522370 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:29:26.264479  522370 command_runner.go:130] > [crio.runtime]
	I1206 10:29:26.264489  522370 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:29:26.264495  522370 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:29:26.264506  522370 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:29:26.264513  522370 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:29:26.264524  522370 command_runner.go:130] > # default_ulimits = [
	I1206 10:29:26.264527  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264534  522370 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:29:26.264543  522370 command_runner.go:130] > # no_pivot = false
	I1206 10:29:26.264549  522370 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:29:26.264555  522370 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:29:26.264561  522370 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:29:26.264569  522370 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:29:26.264576  522370 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:29:26.264584  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264591  522370 command_runner.go:130] > # conmon = ""
	I1206 10:29:26.264595  522370 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:29:26.264602  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:29:26.264612  522370 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:29:26.264623  522370 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:29:26.264629  522370 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:29:26.264643  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264647  522370 command_runner.go:130] > # conmon_env = [
	I1206 10:29:26.264650  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264655  522370 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:29:26.264660  522370 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:29:26.264668  522370 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:29:26.264674  522370 command_runner.go:130] > # default_env = [
	I1206 10:29:26.264677  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264683  522370 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:29:26.264699  522370 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:29:26.264703  522370 command_runner.go:130] > # selinux = false
	I1206 10:29:26.264710  522370 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:29:26.264720  522370 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:29:26.264729  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264734  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.264740  522370 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:29:26.264745  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264751  522370 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:29:26.264759  522370 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:29:26.264767  522370 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:29:26.264774  522370 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:29:26.264789  522370 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:29:26.264794  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264799  522370 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:29:26.264807  522370 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:29:26.264817  522370 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:29:26.264821  522370 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:29:26.264828  522370 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:29:26.264834  522370 command_runner.go:130] > # blockio parameters.
	I1206 10:29:26.264838  522370 command_runner.go:130] > # blockio_reload = false
	I1206 10:29:26.264849  522370 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:29:26.264856  522370 command_runner.go:130] > # irqbalance daemon.
	I1206 10:29:26.264862  522370 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:29:26.264868  522370 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:29:26.264877  522370 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:29:26.264889  522370 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:29:26.264897  522370 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:29:26.264904  522370 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:29:26.264910  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264917  522370 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:29:26.264922  522370 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:29:26.264926  522370 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:29:26.264932  522370 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:29:26.264936  522370 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:29:26.264946  522370 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:29:26.264954  522370 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:29:26.264958  522370 command_runner.go:130] > # will be added.
	I1206 10:29:26.264966  522370 command_runner.go:130] > # default_capabilities = [
	I1206 10:29:26.264970  522370 command_runner.go:130] > # 	"CHOWN",
	I1206 10:29:26.264974  522370 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:29:26.264986  522370 command_runner.go:130] > # 	"FSETID",
	I1206 10:29:26.264990  522370 command_runner.go:130] > # 	"FOWNER",
	I1206 10:29:26.264993  522370 command_runner.go:130] > # 	"SETGID",
	I1206 10:29:26.264996  522370 command_runner.go:130] > # 	"SETUID",
	I1206 10:29:26.265019  522370 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:29:26.265029  522370 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:29:26.265035  522370 command_runner.go:130] > # 	"KILL",
	I1206 10:29:26.265038  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265046  522370 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:29:26.265056  522370 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:29:26.265061  522370 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:29:26.265069  522370 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:29:26.265075  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265088  522370 command_runner.go:130] > default_sysctls = [
	I1206 10:29:26.265093  522370 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:29:26.265096  522370 command_runner.go:130] > ]
	I1206 10:29:26.265101  522370 command_runner.go:130] > # List of devices on the host that a
	I1206 10:29:26.265110  522370 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:29:26.265114  522370 command_runner.go:130] > # allowed_devices = [
	I1206 10:29:26.265118  522370 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:29:26.265123  522370 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:29:26.265127  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265134  522370 command_runner.go:130] > # List of additional devices, specified as
	I1206 10:29:26.265142  522370 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:29:26.265150  522370 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:29:26.265156  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265160  522370 command_runner.go:130] > # additional_devices = [
	I1206 10:29:26.265164  522370 command_runner.go:130] > # ]
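	The "<device-on-host>:<device-on-container>:<permissions>" format above maps one-to-one onto the array entries. A minimal sketch of an uncommented setting, reusing the example device string from the comment (the device itself is hypothetical):

	    additional_devices = [
	        # host path : container path : permissions (r = read, w = write, m = mknod)
	        "/dev/sdc:/dev/xvdc:rwm",
	    ]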
	I1206 10:29:26.265169  522370 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:29:26.265179  522370 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:29:26.265184  522370 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:29:26.265188  522370 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:29:26.265194  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265200  522370 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:29:26.265206  522370 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:29:26.265213  522370 command_runner.go:130] > # Defaults to false.
	I1206 10:29:26.265218  522370 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:29:26.265225  522370 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:29:26.265233  522370 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:29:26.265237  522370 command_runner.go:130] > # hooks_dir = [
	I1206 10:29:26.265245  522370 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:29:26.265248  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265264  522370 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:29:26.265271  522370 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:29:26.265277  522370 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:29:26.265282  522370 command_runner.go:130] > #
	I1206 10:29:26.265293  522370 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:29:26.265302  522370 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:29:26.265309  522370 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:29:26.265312  522370 command_runner.go:130] > #
	I1206 10:29:26.265319  522370 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:29:26.265333  522370 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:29:26.265340  522370 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:29:26.265345  522370 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:29:26.265351  522370 command_runner.go:130] > #
	I1206 10:29:26.265355  522370 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:29:26.265360  522370 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:29:26.265367  522370 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:29:26.265371  522370 command_runner.go:130] > # pids_limit = -1
	I1206 10:29:26.265378  522370 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 10:29:26.265386  522370 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:29:26.265392  522370 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:29:26.265403  522370 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:29:26.265407  522370 command_runner.go:130] > # log_size_max = -1
	I1206 10:29:26.265416  522370 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:29:26.265423  522370 command_runner.go:130] > # log_to_journald = false
	I1206 10:29:26.265431  522370 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:29:26.265437  522370 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:29:26.265448  522370 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:29:26.265453  522370 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:29:26.265458  522370 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:29:26.265464  522370 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:29:26.265470  522370 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:29:26.265476  522370 command_runner.go:130] > # read_only = false
	I1206 10:29:26.265482  522370 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:29:26.265491  522370 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:29:26.265495  522370 command_runner.go:130] > # live configuration reload.
	I1206 10:29:26.265508  522370 command_runner.go:130] > # log_level = "info"
	I1206 10:29:26.265514  522370 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:29:26.265523  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.265529  522370 command_runner.go:130] > # log_filter = ""
	I1206 10:29:26.265536  522370 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265542  522370 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:29:26.265548  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265557  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265564  522370 command_runner.go:130] > # uid_mappings = ""
	I1206 10:29:26.265570  522370 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265578  522370 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:29:26.265586  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265597  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265602  522370 command_runner.go:130] > # gid_mappings = ""
	I1206 10:29:26.265611  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:29:26.265620  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265626  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265635  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265642  522370 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:29:26.265648  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:29:26.265656  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265663  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265680  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265684  522370 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:29:26.265691  522370 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:29:26.265701  522370 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:29:26.265707  522370 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1206 10:29:26.265713  522370 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:29:26.265719  522370 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:29:26.265727  522370 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:29:26.265733  522370 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:29:26.265740  522370 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:29:26.265747  522370 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:29:26.265754  522370 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:29:26.265768  522370 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I1206 10:29:26.265780  522370 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:29:26.265787  522370 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:29:26.265794  522370 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:29:26.265801  522370 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:29:26.265809  522370 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:29:26.265814  522370 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:29:26.265818  522370 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:29:26.265824  522370 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:29:26.265832  522370 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:29:26.265838  522370 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:29:26.265846  522370 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:29:26.265857  522370 command_runner.go:130] > # pinns_path = ""
	I1206 10:29:26.265863  522370 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:29:26.265869  522370 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:29:26.265874  522370 command_runner.go:130] > # enable_criu_support = true
	I1206 10:29:26.265881  522370 command_runner.go:130] > # Enable/disable the generation of container and
	I1206 10:29:26.265887  522370 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1206 10:29:26.265894  522370 command_runner.go:130] > # enable_pod_events = false
	I1206 10:29:26.265901  522370 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:29:26.265906  522370 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:29:26.265910  522370 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:29:26.265915  522370 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:29:26.265925  522370 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1206 10:29:26.265945  522370 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:29:26.265951  522370 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:29:26.265960  522370 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:29:26.265970  522370 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:29:26.265974  522370 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:29:26.265977  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265984  522370 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:29:26.265993  522370 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:29:26.265999  522370 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:29:26.266004  522370 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:29:26.266011  522370 command_runner.go:130] > #
	I1206 10:29:26.266019  522370 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:29:26.266024  522370 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:29:26.266030  522370 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:29:26.266035  522370 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:29:26.266042  522370 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:29:26.266047  522370 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:29:26.266059  522370 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:29:26.266065  522370 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:29:26.266068  522370 command_runner.go:130] > # monitor_env = []
	I1206 10:29:26.266080  522370 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:29:26.266084  522370 command_runner.go:130] > # allowed_annotations = []
	I1206 10:29:26.266090  522370 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:29:26.266094  522370 command_runner.go:130] > # no_sync_log = false
	I1206 10:29:26.266098  522370 command_runner.go:130] > # default_annotations = {}
	I1206 10:29:26.266105  522370 command_runner.go:130] > # stream_websockets = false
	I1206 10:29:26.266112  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.266145  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.266155  522370 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:29:26.266162  522370 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:29:26.266168  522370 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:29:26.266182  522370 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:29:26.266186  522370 command_runner.go:130] > #   in $PATH.
	I1206 10:29:26.266192  522370 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:29:26.266199  522370 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:29:26.266206  522370 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:29:26.266212  522370 command_runner.go:130] > #   state.
	I1206 10:29:26.266218  522370 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:29:26.266224  522370 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:29:26.266232  522370 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:29:26.266239  522370 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:29:26.266247  522370 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:29:26.266254  522370 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:29:26.266265  522370 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:29:26.266275  522370 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:29:26.266283  522370 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:29:26.266287  522370 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:29:26.266294  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:29:26.266304  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:29:26.266315  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:29:26.266324  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:29:26.266332  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:29:26.266339  522370 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:29:26.266348  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:29:26.266356  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:29:26.266368  522370 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:29:26.266375  522370 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:29:26.266382  522370 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:29:26.266388  522370 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:29:26.266394  522370 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:29:26.266410  522370 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:29:26.266417  522370 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:29:26.266425  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:29:26.266435  522370 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:29:26.266440  522370 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:29:26.266447  522370 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:29:26.266455  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:29:26.266463  522370 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:29:26.266467  522370 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:29:26.266475  522370 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:29:26.266479  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:29:26.266489  522370 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1206 10:29:26.266501  522370 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:29:26.266510  522370 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:29:26.266520  522370 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:29:26.266531  522370 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:29:26.266542  522370 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:29:26.266552  522370 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:29:26.266559  522370 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:29:26.266571  522370 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:29:26.266585  522370 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:29:26.266593  522370 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:29:26.266603  522370 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:29:26.266610  522370 command_runner.go:130] > #   when a machine crashes.
	I1206 10:29:26.266617  522370 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:29:26.266625  522370 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:29:26.266636  522370 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:29:26.266641  522370 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:29:26.266647  522370 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:29:26.266656  522370 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:29:26.266660  522370 command_runner.go:130] > #
	I1206 10:29:26.266665  522370 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:29:26.266675  522370 command_runner.go:130] > #
	I1206 10:29:26.266682  522370 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:29:26.266689  522370 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:29:26.266694  522370 command_runner.go:130] > #
	I1206 10:29:26.266701  522370 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:29:26.266708  522370 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:29:26.266711  522370 command_runner.go:130] > #
	I1206 10:29:26.266718  522370 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:29:26.266723  522370 command_runner.go:130] > # feature.
	I1206 10:29:26.266726  522370 command_runner.go:130] > #
	I1206 10:29:26.266732  522370 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1206 10:29:26.266739  522370 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:29:26.266747  522370 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:29:26.266754  522370 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:29:26.266763  522370 command_runner.go:130] > # seconds when "io.kubernetes.cri-o.seccompNotifierAction=stop" is set.
	I1206 10:29:26.266768  522370 command_runner.go:130] > #
	I1206 10:29:26.266774  522370 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:29:26.266786  522370 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:29:26.266792  522370 command_runner.go:130] > #
	I1206 10:29:26.266800  522370 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1206 10:29:26.266806  522370 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:29:26.266809  522370 command_runner.go:130] > #
	I1206 10:29:26.266815  522370 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:29:26.266825  522370 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:29:26.266831  522370 command_runner.go:130] > # limitation.
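	As a worked illustration of the runtime-handler format documented above, the sketch below defines a hypothetical handler named "myruntime" that is permitted to process the seccomp notifier annotation; the handler name and paths are placeholders, not values from this run. The real crun and runc entries used by this cluster follow immediately after.

	    [crio.runtime.runtimes.myruntime]
	    runtime_path = "/usr/local/bin/myruntime"   # absolute path; if omitted, the handler name is looked up in $PATH
	    runtime_type = "oci"                        # "oci" is also assumed when omitted
	    runtime_root = "/run/myruntime"
	    monitor_path = "/usr/libexec/crio/conmon"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",  # opts this handler into the seccomp notifier feature
	    ]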
	I1206 10:29:26.266835  522370 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:29:26.266848  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:29:26.266853  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266856  522370 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:29:26.266862  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266868  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266873  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266880  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266884  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266889  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266892  522370 command_runner.go:130] > allowed_annotations = [
	I1206 10:29:26.266897  522370 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:29:26.266900  522370 command_runner.go:130] > ]
	I1206 10:29:26.266904  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266911  522370 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:29:26.266916  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:29:26.266921  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266932  522370 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:29:26.266939  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266943  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266947  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266952  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266961  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266966  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266970  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266981  522370 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:29:26.266987  522370 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:29:26.266995  522370 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:29:26.267006  522370 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 10:29:26.267024  522370 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:29:26.267035  522370 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:29:26.267047  522370 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:29:26.267054  522370 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:29:26.267063  522370 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:29:26.267072  522370 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:29:26.267080  522370 command_runner.go:130] > # signifying that the default value should be overridden for that resource type.
	I1206 10:29:26.267087  522370 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:29:26.267094  522370 command_runner.go:130] > # Example:
	I1206 10:29:26.267098  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:29:26.267103  522370 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:29:26.267108  522370 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:29:26.267132  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:29:26.267141  522370 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:29:26.267145  522370 command_runner.go:130] > # cpushares = "5"
	I1206 10:29:26.267149  522370 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:29:26.267152  522370 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:29:26.267156  522370 command_runner.go:130] > # cpulimit = "35"
	I1206 10:29:26.267159  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.267165  522370 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:29:26.267172  522370 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:29:26.267181  522370 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:29:26.267188  522370 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:29:26.267199  522370 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:29:26.267205  522370 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
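	Assembling the commented fragments above into one piece, an uncommented workload entry might look like the following sketch; the workload name "throttled" and the resource values are illustrative only:

	    [crio.runtime.workloads.throttled]
	    activation_annotation = "io.crio/throttled"   # pods opt in with exactly this annotation key
	    annotation_prefix = "io.crio.throttled"       # prefix for per-container overrides
	    [crio.runtime.workloads.throttled.resources]
	    cpushares = "512"
	    cpuset = "0-1"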
	I1206 10:29:26.267210  522370 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:29:26.267224  522370 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:29:26.267229  522370 command_runner.go:130] > # Default value is set to true
	I1206 10:29:26.267234  522370 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:29:26.267244  522370 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:29:26.267251  522370 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:29:26.267255  522370 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:29:26.267260  522370 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:29:26.267265  522370 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:29:26.267277  522370 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:29:26.267283  522370 command_runner.go:130] > # timezone = ""
	I1206 10:29:26.267290  522370 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:29:26.267293  522370 command_runner.go:130] > #
	I1206 10:29:26.267299  522370 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:29:26.267310  522370 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:29:26.267313  522370 command_runner.go:130] > [crio.image]
	I1206 10:29:26.267319  522370 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:29:26.267324  522370 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:29:26.267332  522370 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:29:26.267339  522370 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267343  522370 command_runner.go:130] > # global_auth_file = ""
	I1206 10:29:26.267351  522370 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:29:26.267359  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267364  522370 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.267378  522370 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:29:26.267385  522370 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267396  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267401  522370 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:29:26.267407  522370 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:29:26.267413  522370 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 10:29:26.267421  522370 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 10:29:26.267427  522370 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:29:26.267434  522370 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:29:26.267440  522370 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:29:26.267447  522370 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:29:26.267455  522370 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:29:26.267461  522370 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:29:26.267471  522370 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:29:26.267480  522370 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:29:26.267484  522370 command_runner.go:130] > # pinned_images = [
	I1206 10:29:26.267488  522370 command_runner.go:130] > # ]
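	To make the three pattern styles concrete, here is a sketch of an uncommented pinned_images list; the image names are examples, not images actually pinned in this run:

	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	        "registry.k8s.io/pause*",         # glob: wildcard only at the end
	        "*pause*",                        # keyword: wildcards on both ends
	    ]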
	I1206 10:29:26.267494  522370 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:29:26.267502  522370 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:29:26.267509  522370 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:29:26.267517  522370 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:29:26.267525  522370 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:29:26.267530  522370 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:29:26.267538  522370 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:29:26.267548  522370 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:29:26.267556  522370 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:29:26.267566  522370 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1206 10:29:26.267572  522370 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:29:26.267579  522370 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:29:26.267587  522370 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:29:26.267594  522370 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:29:26.267597  522370 command_runner.go:130] > # changing them here.
	I1206 10:29:26.267603  522370 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:29:26.267608  522370 command_runner.go:130] > # insecure_registries = [
	I1206 10:29:26.267613  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267620  522370 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:29:26.267637  522370 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:29:26.267641  522370 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:29:26.267646  522370 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:29:26.267671  522370 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:29:26.267678  522370 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:29:26.267687  522370 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:29:26.267699  522370 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:29:26.267706  522370 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:29:26.267714  522370 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1206 10:29:26.267723  522370 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:29:26.267732  522370 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:29:26.267739  522370 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:29:26.267746  522370 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:29:26.267753  522370 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:29:26.267758  522370 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:29:26.267766  522370 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:29:26.267775  522370 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1206 10:29:26.267781  522370 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:29:26.267788  522370 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:29:26.267795  522370 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:29:26.267798  522370 command_runner.go:130] > # CNI plugins.
	I1206 10:29:26.267802  522370 command_runner.go:130] > [crio.network]
	I1206 10:29:26.267808  522370 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:29:26.267816  522370 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:29:26.267820  522370 command_runner.go:130] > # cni_default_network = ""
	I1206 10:29:26.267826  522370 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:29:26.267836  522370 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:29:26.267842  522370 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:29:26.267845  522370 command_runner.go:130] > # plugin_dirs = [
	I1206 10:29:26.267853  522370 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:29:26.267856  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267861  522370 command_runner.go:130] > # List of included pod metrics.
	I1206 10:29:26.267867  522370 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:29:26.267870  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267879  522370 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:29:26.267885  522370 command_runner.go:130] > [crio.metrics]
	I1206 10:29:26.267890  522370 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:29:26.267897  522370 command_runner.go:130] > # enable_metrics = false
	I1206 10:29:26.267902  522370 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:29:26.267906  522370 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:29:26.267912  522370 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:29:26.267919  522370 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:29:26.267925  522370 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:29:26.267938  522370 command_runner.go:130] > # metrics_collectors = [
	I1206 10:29:26.267943  522370 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:29:26.267947  522370 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:29:26.267951  522370 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:29:26.267954  522370 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:29:26.267958  522370 command_runner.go:130] > # 	"operations_total",
	I1206 10:29:26.267962  522370 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:29:26.267966  522370 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:29:26.267970  522370 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:29:26.267977  522370 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:29:26.267981  522370 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:29:26.267986  522370 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:29:26.267990  522370 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:29:26.267993  522370 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:29:26.267997  522370 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:29:26.268003  522370 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:29:26.268007  522370 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:29:26.268011  522370 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:29:26.268014  522370 command_runner.go:130] > # ]
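	A sketch of enabling metrics with a subset of collectors, following the prefix rule described above (the host and port shown are the documented defaults):

	    [crio.metrics]
	    enable_metrics = true
	    metrics_host = "127.0.0.1"
	    metrics_port = 9090
	    metrics_collectors = [
	        # per the prefix rule, "crio_operations_total" and
	        # "container_runtime_crio_operations_total" name the same collector
	        "operations_total",
	        "image_pulls_failure_total",
	    ]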
	I1206 10:29:26.268020  522370 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:29:26.268024  522370 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:29:26.268029  522370 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:29:26.268032  522370 command_runner.go:130] > # metrics_port = 9090
	I1206 10:29:26.268037  522370 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:29:26.268041  522370 command_runner.go:130] > # metrics_socket = ""
	I1206 10:29:26.268046  522370 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:29:26.268052  522370 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:29:26.268061  522370 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:29:26.268070  522370 command_runner.go:130] > # certificate on any modification event.
	I1206 10:29:26.268074  522370 command_runner.go:130] > # metrics_cert = ""
	I1206 10:29:26.268079  522370 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:29:26.268086  522370 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:29:26.268090  522370 command_runner.go:130] > # metrics_key = ""
	I1206 10:29:26.268099  522370 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:29:26.268106  522370 command_runner.go:130] > [crio.tracing]
	I1206 10:29:26.268112  522370 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:29:26.268116  522370 command_runner.go:130] > # enable_tracing = false
	I1206 10:29:26.268121  522370 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:29:26.268127  522370 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:29:26.268135  522370 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:29:26.268143  522370 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 10:29:26.268147  522370 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:29:26.268150  522370 command_runner.go:130] > [crio.nri]
	I1206 10:29:26.268155  522370 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:29:26.268158  522370 command_runner.go:130] > # enable_nri = true
	I1206 10:29:26.268162  522370 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:29:26.268166  522370 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:29:26.268170  522370 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:29:26.268174  522370 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:29:26.268181  522370 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:29:26.268187  522370 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:29:26.268195  522370 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:29:26.268252  522370 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:29:26.268260  522370 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:29:26.268265  522370 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:29:26.268270  522370 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:29:26.268274  522370 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:29:26.268287  522370 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:29:26.268294  522370 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:29:26.268307  522370 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:29:26.268312  522370 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:29:26.268322  522370 command_runner.go:130] > # - OCI hook injection
	I1206 10:29:26.268327  522370 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:29:26.268333  522370 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:29:26.268340  522370 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:29:26.268344  522370 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:29:26.268356  522370 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:29:26.268363  522370 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:29:26.268368  522370 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:29:26.268375  522370 command_runner.go:130] > #
	I1206 10:29:26.268380  522370 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:29:26.268384  522370 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:29:26.268397  522370 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:29:26.268403  522370 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:29:26.268408  522370 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:29:26.268416  522370 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:29:26.268421  522370 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:29:26.268425  522370 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:29:26.268431  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268436  522370 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
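	A sketch of enabling the default validator with the keys listed above; the required plugin name is hypothetical:

	    [crio.nri.default_validator]
	    nri_enable_default_validator = true
	    nri_validator_reject_oci_hook_adjustment = true   # reject containers whose NRI plugins inject OCI hooks
	    nri_validator_required_plugins = [
	        "my-nri-plugin",                              # hypothetical plugin that must process every container creation
	    ]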
	I1206 10:29:26.268442  522370 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:29:26.268446  522370 command_runner.go:130] > [crio.stats]
	I1206 10:29:26.268454  522370 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:29:26.268465  522370 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:29:26.268469  522370 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:29:26.268475  522370 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:29:26.268484  522370 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:29:26.268489  522370 command_runner.go:130] > # collection_period = 0
	I1206 10:29:26.268581  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:26.268595  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:26.268620  522370 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:29:26.268646  522370 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:29:26.268768  522370 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
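
The rendered config above is a multi-document YAML stream: kubeadm's InitConfiguration and ClusterConfiguration (v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration, separated by `---`. As a minimal sketch (not minikube source; only the file path is taken from the log), such a stream can be split and inspected with gopkg.in/yaml.v3, whose decoder reads one document per Decode call:

	// sketch: enumerate the documents in a kubeadm-style YAML stream
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}
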
	I1206 10:29:26.268849  522370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:29:26.276198  522370 command_runner.go:130] > kubeadm
	I1206 10:29:26.276217  522370 command_runner.go:130] > kubectl
	I1206 10:29:26.276221  522370 command_runner.go:130] > kubelet
	I1206 10:29:26.277128  522370 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:29:26.277245  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:29:26.285085  522370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:29:26.297894  522370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:29:26.310811  522370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:29:26.323875  522370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:29:26.327560  522370 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
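
The grep above confirms /etc/hosts already maps control-plane.minikube.internal to the node IP, so no rewrite is needed. A rough Go equivalent of that check (an illustrative sketch, not minikube code):

	// sketch: does /etc/hosts map the given IP to the given hostname?
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func hostsHasEntry(ip, host string) (bool, error) {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) < 2 || fields[0] != ip {
				continue
			}
			for _, h := range fields[1:] {
				if h == host {
					return true, nil
				}
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hostsHasEntry("192.168.49.2", "control-plane.minikube.internal")
		fmt.Println(ok, err)
	}
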
	I1206 10:29:26.327877  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:26.463333  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:27.181623  522370 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:29:27.181646  522370 certs.go:195] generating shared ca certs ...
	I1206 10:29:27.181662  522370 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.181794  522370 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:29:27.181841  522370 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:29:27.181855  522370 certs.go:257] generating profile certs ...
	I1206 10:29:27.181981  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:29:27.182049  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:29:27.182120  522370 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:29:27.182139  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:29:27.182178  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:29:27.182195  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:29:27.182206  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:29:27.182221  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:29:27.182231  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:29:27.182242  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:29:27.182252  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:29:27.182310  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:29:27.182343  522370 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:29:27.182351  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:29:27.182391  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:29:27.182420  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:29:27.182445  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:29:27.182502  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:27.182537  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.182553  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.182567  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.183155  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:29:27.204776  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:29:27.223807  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:29:27.246828  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:29:27.269763  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:29:27.290536  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:29:27.308147  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:29:27.326269  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:29:27.344314  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:29:27.361949  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:29:27.379296  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:29:27.396825  522370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:29:27.409539  522370 ssh_runner.go:195] Run: openssl version
	I1206 10:29:27.415501  522370 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:29:27.415885  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.423483  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:29:27.431381  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435336  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435420  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435491  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.477997  522370 command_runner.go:130] > 51391683
	I1206 10:29:27.478450  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:29:27.485910  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.493199  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:29:27.500533  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504197  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504254  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504314  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.549795  522370 command_runner.go:130] > 3ec20f2e
	I1206 10:29:27.550294  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:29:27.557856  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.565301  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:29:27.572772  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576768  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576853  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576925  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.618106  522370 command_runner.go:130] > b5213941
	I1206 10:29:27.618536  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
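
The values printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes: `openssl x509 -hash -noout` prints the hash, and the `<hash>.0` symlink under /etc/ssl/certs is the name OpenSSL's hashed-directory lookup expects when verifying against that CA. A sketch of the same hash-and-symlink step (assumes openssl on PATH and root privileges; not minikube source):

	// sketch: install a CA PEM into /etc/ssl/certs under its subject hash
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
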
	I1206 10:29:27.626130  522370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629702  522370 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629728  522370 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:29:27.629736  522370 command_runner.go:130] > Device: 259,1	Inode: 3640487     Links: 1
	I1206 10:29:27.629742  522370 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:27.629749  522370 command_runner.go:130] > Access: 2025-12-06 10:25:18.913466133 +0000
	I1206 10:29:27.629754  522370 command_runner.go:130] > Modify: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629758  522370 command_runner.go:130] > Change: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629764  522370 command_runner.go:130] >  Birth: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629823  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:29:27.670498  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.670941  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:29:27.711871  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.712351  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:29:27.753204  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.753665  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:29:27.795554  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.796089  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:29:27.836809  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.837203  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:29:27.878291  522370 command_runner.go:130] > Certificate will not expire
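
Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 together with "Certificate will not expire" means it does not. The same test can be done in-process with crypto/x509 (a sketch, not minikube's implementation):

	// sketch: in-process equivalent of `openssl x509 -noout -checkend 86400`
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// expires within d iff (now + d) is past NotAfter
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
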
	I1206 10:29:27.878357  522370 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:27.878433  522370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:27.878503  522370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:27.905835  522370 cri.go:89] found id: ""
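
The crictl invocation above lists all containers (including stopped ones) carrying the io.kubernetes.pod.namespace=kube-system label; the empty `found id: ""` result means no control-plane containers exist yet, which steers the flow toward a cluster restart rather than reuse. Roughly the same query, shelling out as the log does (sketch; requires crictl and a configured CRI endpoint):

	// sketch: count kube-system containers known to the CRI runtime
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}
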
	I1206 10:29:27.905910  522370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:29:27.912750  522370 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:29:27.912773  522370 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:29:27.912780  522370 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:29:27.913690  522370 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:29:27.913706  522370 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:29:27.913783  522370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:29:27.921335  522370 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:29:27.921755  522370 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.921867  522370 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "functional-123579" cluster setting kubeconfig missing "functional-123579" context setting]
	I1206 10:29:27.922200  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
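
"needs updating (will repair)" means the kubeconfig on disk lacks both the cluster and the context entry for this profile, so minikube rewrites the file under a lock. A hedged sketch of such a repair using client-go's clientcmd API (the matching user entry is assumed to exist already):

	// sketch: add missing cluster/context entries to an existing kubeconfig
	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func repair(path, name, server, caFile string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server
		cluster.CertificateAuthority = caFile
		cfg.Clusters[name] = cluster

		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // assumes a matching user entry is present
		cfg.Contexts[name] = ctx

		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		_ = repair("/home/jenkins/minikube-integration/22049-484819/kubeconfig",
			"functional-123579", "https://192.168.49.2:8441",
			"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt")
	}
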
	I1206 10:29:27.922608  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.922766  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.923311  522370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:29:27.923332  522370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:29:27.923338  522370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:29:27.923344  522370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:29:27.923348  522370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:29:27.923710  522370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:29:27.923805  522370 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:29:27.932172  522370 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:29:27.932206  522370 kubeadm.go:602] duration metric: took 18.493373ms to restartPrimaryControlPlane
	I1206 10:29:27.932216  522370 kubeadm.go:403] duration metric: took 53.86688ms to StartCluster
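
The restart fast path hinges on the `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` run above: exit status 0 means the freshly rendered config matches what the cluster was started with, so the control plane is left untouched ("does not require reconfiguration"). A sketch of that decision (diff exits 1 when the files differ, >1 on error; not minikube source):

	// sketch: decide whether a restart needs reconfiguration by diffing
	// the on-disk kubeadm config against the newly rendered one
	package main

	import (
		"fmt"
		"os/exec"
	)

	func needsReconfigure(oldPath, newPath string) (bool, error) {
		err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil // identical: keep the running control plane
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, nil // files differ
		}
		return false, err // diff itself failed
	}

	func main() {
		changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}
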
	I1206 10:29:27.932230  522370 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.932300  522370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.932906  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.933111  522370 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:29:27.933400  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:27.933457  522370 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:29:27.933598  522370 addons.go:70] Setting storage-provisioner=true in profile "functional-123579"
	I1206 10:29:27.933615  522370 addons.go:239] Setting addon storage-provisioner=true in "functional-123579"
	I1206 10:29:27.933640  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.933662  522370 addons.go:70] Setting default-storageclass=true in profile "functional-123579"
	I1206 10:29:27.933709  522370 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-123579"
	I1206 10:29:27.934067  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.934105  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.937180  522370 out.go:179] * Verifying Kubernetes components...
	I1206 10:29:27.943300  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:27.955394  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.955630  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.955941  522370 addons.go:239] Setting addon default-storageclass=true in "functional-123579"
	I1206 10:29:27.955970  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.956408  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.980014  522370 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:29:27.983923  522370 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:27.983954  522370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:29:27.984026  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:27.996144  522370 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:27.996165  522370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:29:27.996228  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:28.024613  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.044906  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.158003  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:28.171055  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:28.191069  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:28.930363  522370 node_ready.go:35] waiting up to 6m0s for node "functional-123579" to be "Ready" ...
	I1206 10:29:28.930490  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930625  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930666  522370 retry.go:31] will retry after 220.153302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930749  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930787  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930813  522370 retry.go:31] will retry after 205.296978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:28.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:28.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
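
The round_trippers lines with status="" and milliseconds=0 are GETs of the node object that never reached the apiserver (the connection is refused while it restarts); minikube keeps polling until the node reports Ready. A comparable wait loop with client-go, as a sketch under the paths shown in the log:

	// sketch: poll until the node reports Ready, tolerating transient
	// connection-refused errors while the apiserver comes back up
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-484819/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-123579", metav1.GetOptions{})
				if err != nil {
					return false, nil // swallow transient dial errors and retry
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("ready:", err == nil)
	}
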
	I1206 10:29:29.136761  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.151269  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.213820  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.217541  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.217581  522370 retry.go:31] will retry after 414.855546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235243  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.235363  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235412  522370 retry.go:31] will retry after 542.074768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
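
The retry.go delays interleaved through this stretch (220ms, 414ms, 965ms, 2.12s, and so on up to 5.77s) follow the usual pattern of roughly doubling backoff with jitter. A minimal sketch of such a policy (an illustration, not minikube's retry.go):

	// sketch: capped exponential backoff with jitter
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base, maxDelay time.Duration, f func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			// jitter in [0.5, 1.5) of the nominal delay, then double, capped
			sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
		return err
	}

	func main() {
		_ = retry(5, 200*time.Millisecond, 6*time.Second, func() error {
			return fmt.Errorf("connection refused")
		})
	}
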
	I1206 10:29:29.431607  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.431755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.432098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.633557  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.704871  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.715208  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.715276  522370 retry.go:31] will retry after 512.072151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.778572  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.842567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.842631  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.842656  522370 retry.go:31] will retry after 453.896864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.930817  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.930917  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.931386  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.227644  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:30.292361  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.292404  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.292441  522370 retry.go:31] will retry after 965.22043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.297573  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:30.354035  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.357760  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.357796  522370 retry.go:31] will retry after 830.21573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.430970  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.431039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.431358  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:30.931272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
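
`dial tcp 192.168.49.2:8441: connect: connection refused` means nothing is listening on the apiserver port at all, which is different from a listener that answers but fails TLS or health checks. A quick probe that makes the distinction (sketch):

	// sketch: raw TCP probe of the apiserver endpoint
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}
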
	I1206 10:29:31.188810  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:31.258540  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:31.280251  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.280382  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.280411  522370 retry.go:31] will retry after 670.25639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331402  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.331517  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331545  522370 retry.go:31] will retry after 1.065706699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.430665  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.430772  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.431166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.930712  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.930893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.931401  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.951563  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:32.028942  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.028998  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.029018  522370 retry.go:31] will retry after 2.122665166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.397466  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:32.431043  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.431193  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.431584  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:32.458856  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.458892  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.458911  522370 retry.go:31] will retry after 1.728877951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.931628  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.931705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.932104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:32.932161  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:33.430893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.431324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:33.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.931279  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.152755  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:34.188350  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:34.249027  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.249069  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.249090  522370 retry.go:31] will retry after 3.684646027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294198  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.294244  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294296  522370 retry.go:31] will retry after 1.427612825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.431504  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.930753  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:35.430737  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:35.431258  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
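	Interleaved with the apply retries, the round_trippers entries show a ~500ms poll of /api/v1/nodes/functional-123579, with node_ready.go surfacing a warning roughly every third failed cycle. A minimal Go sketch of such a readiness poll, assuming a plain HTTP GET stands in for the real client (which uses TLS and the protobuf Accept header shown above):

		// waitNodeReady polls url every 500ms until it answers 200 or the
		// timeout elapses. Illustrative stand-in for minikube's node Ready
		// check; a real caller would also decode the returned Node object
		// and inspect its Ready condition.
		package main

		import (
			"fmt"
			"net/http"
			"time"
		)

		func waitNodeReady(url string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				resp, err := http.Get(url)
				if err != nil {
					fmt.Printf("error getting node (will retry): %v\n", err)
				} else {
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("node not reachable within %s", timeout)
		}

		func main() {
			fmt.Println(waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-123579", 5*time.Second))
		}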
	I1206 10:29:35.722778  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:35.786215  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:35.786258  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.786277  522370 retry.go:31] will retry after 5.772571648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.931559  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.931640  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.431654  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.431914  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.930676  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.931086  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.430781  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.931882  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:37.931937  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:37.934240  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:38.012005  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:38.012049  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.012071  522370 retry.go:31] will retry after 2.264254307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.430647  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.430724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.431052  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:38.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.930848  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.931203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.931197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:40.276629  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:40.338233  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:40.338274  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.338294  522370 retry.go:31] will retry after 6.465617702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.431489  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.431563  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.431893  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:40.431948  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:40.931681  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.931758  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.932017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.559542  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:41.618815  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:41.618852  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.618871  522370 retry.go:31] will retry after 5.212992024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.931382  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.931461  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.931787  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:42.431525  522370 type.go:168] "Request Body" body=""
	I1206 10:29:42.431601  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:42.431866  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:42.931428  522370 type.go:168] "Request Body" body=""
	I1206 10:29:42.931503  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:42.931826  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:42.931883  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:43.431618  522370 type.go:168] "Request Body" body=""
	I1206 10:29:43.431692  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:43.432027  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:43.931348  522370 type.go:168] "Request Body" body=""
	I1206 10:29:43.931423  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:43.931690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:44.431562  522370 type.go:168] "Request Body" body=""
	I1206 10:29:44.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:44.431999  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:44.930672  522370 type.go:168] "Request Body" body=""
	I1206 10:29:44.930749  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:44.931083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:45.430826  522370 type.go:168] "Request Body" body=""
	I1206 10:29:45.430904  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:45.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:45.431243  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
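	Every poll above fails with dial tcp 192.168.49.2:8441: connect: connection refused, meaning the host is reachable but nothing is listening on the apiserver port. On Unix-like systems Go code can classify that cause with errors.Is, as in this hedged sketch (the address is the one from the log; the rest is illustrative):

		package main

		import (
			"errors"
			"fmt"
			"net"
			"syscall"
		)

		func main() {
			conn, err := net.Dial("tcp", "192.168.49.2:8441")
			switch {
			case err == nil:
				conn.Close()
				fmt.Println("port is open again")
			case errors.Is(err, syscall.ECONNREFUSED):
				// Host up, port closed: the kube-apiserver behind :8441 is
				// down, the failure mode repeated throughout this log.
				fmt.Println("apiserver not listening:", err)
			default:
				fmt.Println("other dial error:", err)
			}
		}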
	I1206 10:29:45.930931  522370 type.go:168] "Request Body" body=""
	I1206 10:29:45.931023  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:45.931426  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:46.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:29:46.430842  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:46.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:46.804868  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:46.832399  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:46.865940  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.865975  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.865994  522370 retry.go:31] will retry after 4.982943882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.906612  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906632  522370 retry.go:31] will retry after 5.755281988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:29:46.930817  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:46.931156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:47.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:29:47.430851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:47.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:47.931383  522370 type.go:168] "Request Body" body=""
	I1206 10:29:47.931460  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:47.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:47.931843  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:48.431576  522370 type.go:168] "Request Body" body=""
	I1206 10:29:48.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:48.431909  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:48.931675  522370 type.go:168] "Request Body" body=""
	I1206 10:29:48.931755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:48.932083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:49.430777  522370 type.go:168] "Request Body" body=""
	I1206 10:29:49.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:49.431211  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:49.930899  522370 type.go:168] "Request Body" body=""
	I1206 10:29:49.930969  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:49.931292  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:50.430989  522370 type.go:168] "Request Body" body=""
	I1206 10:29:50.431065  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:50.431426  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:50.431484  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:50.930772  522370 type.go:168] "Request Body" body=""
	I1206 10:29:50.930857  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:50.931213  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:51.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:29:51.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:51.431095  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:51.849751  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:51.909824  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:51.909861  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.909882  522370 retry.go:31] will retry after 17.161477779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
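	The kubectl error text itself names an escape hatch: client-side validation needs the apiserver's OpenAPI document, so --validate=false removes that round-trip. A hedged Go sketch of issuing the same apply with validation disabled follows; the paths are the ones from the log, and while the apiserver is down the apply would still fail at submission, so the flag only eliminates the OpenAPI fetch.

		package main

		import (
			"fmt"
			"os/exec"
		)

		func main() {
			// Same command as the ssh_runner lines above, plus --validate=false.
			// sudo accepts the leading VAR=value assignment before the command.
			cmd := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "--validate=false",
				"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
			out, err := cmd.CombinedOutput()
			fmt.Printf("%s(err=%v)\n", out, err)
		}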
	I1206 10:29:51.930951  522370 type.go:168] "Request Body" body=""
	I1206 10:29:51.931035  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:51.931342  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:52.431051  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.431146  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.431458  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:52.431512  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:52.663117  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:52.730608  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:52.730656  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.730678  522370 retry.go:31] will retry after 12.860735555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.931180  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.931254  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.931513  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:53.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:53.431665  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:53.432017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:53.930759  522370 type.go:168] "Request Body" body=""
	I1206 10:29:53.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:53.931169  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:54.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:29:54.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:54.431095  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:54.930744  522370 type.go:168] "Request Body" body=""
	I1206 10:29:54.930824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:54.931164  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:54.931216  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:55.430912  522370 type.go:168] "Request Body" body=""
	I1206 10:29:55.430990  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:55.431336  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:55.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:29:55.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:55.931104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:56.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:29:56.430830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:56.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:56.930913  522370 type.go:168] "Request Body" body=""
	I1206 10:29:56.931011  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:56.931387  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:56.931449  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:57.430732  522370 type.go:168] "Request Body" body=""
	I1206 10:29:57.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:57.431149  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:57.931360  522370 type.go:168] "Request Body" body=""
	I1206 10:29:57.931442  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:57.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:58.431399  522370 type.go:168] "Request Body" body=""
	I1206 10:29:58.431472  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:58.431799  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:58.931551  522370 type.go:168] "Request Body" body=""
	I1206 10:29:58.931619  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:58.931871  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:58.931909  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:59.431652  522370 type.go:168] "Request Body" body=""
	I1206 10:29:59.431735  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:59.432062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:59.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:29:59.930819  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:59.931185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:00.449420  522370 type.go:168] "Request Body" body=""
	I1206 10:30:00.449497  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:00.449815  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:00.931632  522370 type.go:168] "Request Body" body=""
	I1206 10:30:00.931721  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:00.932114  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:00.932186  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:01.430891  522370 type.go:168] "Request Body" body=""
	I1206 10:30:01.430971  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:01.431362  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:01.930910  522370 type.go:168] "Request Body" body=""
	I1206 10:30:01.930981  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:01.931281  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:02.430983  522370 type.go:168] "Request Body" body=""
	I1206 10:30:02.431111  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:02.431463  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:02.931309  522370 type.go:168] "Request Body" body=""
	I1206 10:30:02.931390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:02.931736  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:03.431532  522370 type.go:168] "Request Body" body=""
	I1206 10:30:03.431608  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:03.431873  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:03.431923  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:03.931662  522370 type.go:168] "Request Body" body=""
	I1206 10:30:03.931740  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:03.932084  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:04.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:30:04.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:04.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:04.930979  522370 type.go:168] "Request Body" body=""
	I1206 10:30:04.931048  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:04.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:05.430768  522370 type.go:168] "Request Body" body=""
	I1206 10:30:05.430842  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:05.431235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:05.591568  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:05.650107  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:05.653722  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.653756  522370 retry.go:31] will retry after 16.31009922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.931225  522370 type.go:168] "Request Body" body=""
	I1206 10:30:05.931303  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:05.931640  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:05.931697  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:06.431453  522370 type.go:168] "Request Body" body=""
	I1206 10:30:06.431523  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:06.431774  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:06.931557  522370 type.go:168] "Request Body" body=""
	I1206 10:30:06.931629  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:06.931951  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:07.431619  522370 type.go:168] "Request Body" body=""
	I1206 10:30:07.431700  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:07.432067  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:07.931280  522370 type.go:168] "Request Body" body=""
	I1206 10:30:07.931358  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:07.931625  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:08.431487  522370 type.go:168] "Request Body" body=""
	I1206 10:30:08.431561  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:08.431928  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:08.431989  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:08.930675  522370 type.go:168] "Request Body" body=""
	I1206 10:30:08.930751  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:08.931076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:09.072554  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:09.131495  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:09.131531  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.131550  522370 retry.go:31] will retry after 16.873374267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.430840  522370 type.go:168] "Request Body" body=""
	I1206 10:30:09.430908  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:09.431218  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:09.930794  522370 type.go:168] "Request Body" body=""
	I1206 10:30:09.930868  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:09.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:10.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:30:10.430802  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:10.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:10.930730  522370 type.go:168] "Request Body" body=""
	I1206 10:30:10.930805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:10.931062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:10.931111  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:11.430884  522370 type.go:168] "Request Body" body=""
	I1206 10:30:11.430959  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:11.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:11.930807  522370 type.go:168] "Request Body" body=""
	I1206 10:30:11.930877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:11.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:12.430821  522370 type.go:168] "Request Body" body=""
	I1206 10:30:12.430897  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:12.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:12.931320  522370 type.go:168] "Request Body" body=""
	I1206 10:30:12.931390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:12.931738  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:12.931801  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:13.431588  522370 type.go:168] "Request Body" body=""
	I1206 10:30:13.431660  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:13.432007  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:13.930697  522370 type.go:168] "Request Body" body=""
	I1206 10:30:13.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:13.931074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:14.430876  522370 type.go:168] "Request Body" body=""
	I1206 10:30:14.430958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:14.431286  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:14.930809  522370 type.go:168] "Request Body" body=""
	I1206 10:30:14.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:14.931234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:15.430953  522370 type.go:168] "Request Body" body=""
	I1206 10:30:15.431021  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:15.431299  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:15.431359  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:15.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:30:15.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:15.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:16.430790  522370 type.go:168] "Request Body" body=""
	I1206 10:30:16.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:16.431183  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:16.930736  522370 type.go:168] "Request Body" body=""
	I1206 10:30:16.930809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:16.931077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:17.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:30:17.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:17.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:17.931237  522370 type.go:168] "Request Body" body=""
	I1206 10:30:17.931314  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:17.931645  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:17.931700  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:18.431393  522370 type.go:168] "Request Body" body=""
	I1206 10:30:18.431479  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:18.431748  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:18.931581  522370 type.go:168] "Request Body" body=""
	I1206 10:30:18.931653  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:18.931971  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:19.430702  522370 type.go:168] "Request Body" body=""
	I1206 10:30:19.430780  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:19.431097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:19.930811  522370 type.go:168] "Request Body" body=""
	I1206 10:30:19.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:19.931178  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:20.430768  522370 type.go:168] "Request Body" body=""
	I1206 10:30:20.430839  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:20.431197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:20.431259  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:20.930943  522370 type.go:168] "Request Body" body=""
	I1206 10:30:20.931019  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:20.931387  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:21.431075  522370 type.go:168] "Request Body" body=""
	I1206 10:30:21.431159  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:21.431476  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:21.930790  522370 type.go:168] "Request Body" body=""
	I1206 10:30:21.930867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:21.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:21.964425  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:22.031284  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:22.031334  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:22.031356  522370 retry.go:31] will retry after 35.791693435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
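The failed apply above is rescheduled by minikube's retry helper with a randomized delay ("will retry after 35.791693435s"). A hedged sketch of that retry-with-jitter shape is below; the function name, attempt count, and delay bounds are assumptions for illustration, not minikube's actual retry.go.

    package main

    import (
    	"log"
    	"math/rand"
    	"time"
    )

    // retryWithJitter runs step, and on failure sleeps a randomized
    // interval before the next attempt, so parallel addon applies do not
    // retry in lockstep (hence the uneven 35.79s / 34.92s delays in the
    // log). base must be positive.
    func retryWithJitter(step func() error, maxAttempts int, base time.Duration) error {
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = step(); err == nil {
    			return nil
    		}
    		delay := base + time.Duration(rand.Int63n(int64(base)))
    		log.Printf("will retry after %s: %v", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }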
	I1206 10:30:22.430787  522370 type.go:168] "Request Body" body=""
	I1206 10:30:22.430867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:22.431181  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:22.930968  522370 type.go:168] "Request Body" body=""
	I1206 10:30:22.931043  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:22.931326  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:22.931374  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:23.430789  522370 type.go:168] "Request Body" body=""
	I1206 10:30:23.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:23.431214  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:23.930931  522370 type.go:168] "Request Body" body=""
	I1206 10:30:23.931004  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:23.931354  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:24.430922  522370 type.go:168] "Request Body" body=""
	I1206 10:30:24.430996  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:24.431280  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:24.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:30:24.930844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:24.931166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:25.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:30:25.430829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:25.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:25.431230  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:25.931005  522370 type.go:168] "Request Body" body=""
	I1206 10:30:25.931194  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:25.932226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:26.005763  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:26.074782  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:26.074834  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:26.074855  522370 retry.go:31] will retry after 34.92165894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:26.431288  522370 type.go:168] "Request Body" body=""
	I1206 10:30:26.431390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:26.431714  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:26.931353  522370 type.go:168] "Request Body" body=""
	I1206 10:30:26.931426  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:26.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:27.431397  522370 type.go:168] "Request Body" body=""
	I1206 10:30:27.431473  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:27.431770  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:27.431821  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:27.931640  522370 type.go:168] "Request Body" body=""
	I1206 10:30:27.931715  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:27.932047  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:28.431697  522370 type.go:168] "Request Body" body=""
	I1206 10:30:28.431771  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:28.432103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:28.930727  522370 type.go:168] "Request Body" body=""
	I1206 10:30:28.930800  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:28.931097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:29.430756  522370 type.go:168] "Request Body" body=""
	I1206 10:30:29.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:29.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:29.930774  522370 type.go:168] "Request Body" body=""
	I1206 10:30:29.930850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:29.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:29.931223  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:30.430841  522370 type.go:168] "Request Body" body=""
	I1206 10:30:30.430907  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:30.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:30.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:30:30.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:30.931181  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:31.430916  522370 type.go:168] "Request Body" body=""
	I1206 10:30:31.431010  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:31.431428  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:31.931099  522370 type.go:168] "Request Body" body=""
	I1206 10:30:31.931194  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:31.931454  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:31.931504  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:32.431294  522370 type.go:168] "Request Body" body=""
	I1206 10:30:32.431377  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:32.431741  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:32.931507  522370 type.go:168] "Request Body" body=""
	I1206 10:30:32.931587  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:32.931910  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:33.431612  522370 type.go:168] "Request Body" body=""
	I1206 10:30:33.431689  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:33.431967  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:33.930699  522370 type.go:168] "Request Body" body=""
	I1206 10:30:33.930774  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:33.931115  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:34.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:30:34.430956  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:34.431328  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:34.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:34.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:30:34.930826  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:34.931100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:35.430769  522370 type.go:168] "Request Body" body=""
	I1206 10:30:35.430844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:35.431198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:35.930924  522370 type.go:168] "Request Body" body=""
	I1206 10:30:35.931010  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:35.931368  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:36.431048  522370 type.go:168] "Request Body" body=""
	I1206 10:30:36.431167  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:36.431482  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:36.431535  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:36.931272  522370 type.go:168] "Request Body" body=""
	I1206 10:30:36.931345  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:36.931668  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:37.431458  522370 type.go:168] "Request Body" body=""
	I1206 10:30:37.431553  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:37.431867  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:37.931612  522370 type.go:168] "Request Body" body=""
	I1206 10:30:37.931682  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:37.932028  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:38.430754  522370 type.go:168] "Request Body" body=""
	I1206 10:30:38.430831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:38.431203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:38.930759  522370 type.go:168] "Request Body" body=""
	I1206 10:30:38.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:38.931173  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:38.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:39.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:30:39.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:39.431104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:39.930841  522370 type.go:168] "Request Body" body=""
	I1206 10:30:39.930938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:39.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:40.431028  522370 type.go:168] "Request Body" body=""
	I1206 10:30:40.431104  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:40.431481  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:40.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:30:40.931298  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:40.931552  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:40.931592  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:41.431348  522370 type.go:168] "Request Body" body=""
	I1206 10:30:41.431446  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:41.431767  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:41.931566  522370 type.go:168] "Request Body" body=""
	I1206 10:30:41.931647  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:41.931976  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:42.431636  522370 type.go:168] "Request Body" body=""
	I1206 10:30:42.431716  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:42.431988  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:42.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:30:42.931066  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:42.931431  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:43.431209  522370 type.go:168] "Request Body" body=""
	I1206 10:30:43.431287  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:43.431648  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:43.431703  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:43.931389  522370 type.go:168] "Request Body" body=""
	I1206 10:30:43.931457  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:43.931727  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:44.431509  522370 type.go:168] "Request Body" body=""
	I1206 10:30:44.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:44.431898  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:44.930652  522370 type.go:168] "Request Body" body=""
	I1206 10:30:44.930726  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:44.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.430750  522370 type.go:168] "Request Body" body=""
	I1206 10:30:45.430832  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:30:45.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.931167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:45.931245  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:46.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:30:46.430992  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.431364  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:30:46.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.931154  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:47.430792  522370 type.go:168] "Request Body" body=""
	I1206 10:30:47.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.431273  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:47.931290  522370 type.go:168] "Request Body" body=""
	I1206 10:30:47.931389  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.931707  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.931764  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:48.431531  522370 type.go:168] "Request Body" body=""
	I1206 10:30:48.431600  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.431884  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.931635  522370 type.go:168] "Request Body" body=""
	I1206 10:30:48.931707  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.932051  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.430636  522370 type.go:168] "Request Body" body=""
	I1206 10:30:49.430720  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.431043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.930721  522370 type.go:168] "Request Body" body=""
	I1206 10:30:49.930793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.931074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.430687  522370 type.go:168] "Request Body" body=""
	I1206 10:30:50.430783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.431076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:50.431162  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:50.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:30:50.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.931221  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.430755  522370 type.go:168] "Request Body" body=""
	I1206 10:30:51.430826  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.431099  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.930829  522370 type.go:168] "Request Body" body=""
	I1206 10:30:51.930912  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.931261  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:52.430981  522370 type.go:168] "Request Body" body=""
	I1206 10:30:52.431081  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.431382  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.431432  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:52.931312  522370 type.go:168] "Request Body" body=""
	I1206 10:30:52.931405  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.931664  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.430694  522370 type.go:168] "Request Body" body=""
	I1206 10:30:53.430779  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.930852  522370 type.go:168] "Request Body" body=""
	I1206 10:30:53.930925  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.931259  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.430827  522370 type.go:168] "Request Body" body=""
	I1206 10:30:54.430913  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.431229  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.930765  522370 type.go:168] "Request Body" body=""
	I1206 10:30:54.930847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.931254  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:30:55.431006  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.431312  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.930988  522370 type.go:168] "Request Body" body=""
	I1206 10:30:55.931078  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.931370  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.430800  522370 type.go:168] "Request Body" body=""
	I1206 10:30:56.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.930928  522370 type.go:168] "Request Body" body=""
	I1206 10:30:56.931021  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.931336  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:56.931382  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:57.430743  522370 type.go:168] "Request Body" body=""
	I1206 10:30:57.430812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.431182  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.823985  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:57.887311  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891368  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891481  522370 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:30:57.930973  522370 type.go:168] "Request Body" body=""
	I1206 10:30:57.931045  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.931345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:30:58.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:30:58.930784  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.931072  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.430808  522370 type.go:168] "Request Body" body=""
	I1206 10:30:59.430894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.431320  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:59.930807  522370 type.go:168] "Request Body" body=""
	I1206 10:30:59.930882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.931248  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.430992  522370 type.go:168] "Request Body" body=""
	I1206 10:31:00.431085  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.431404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.930779  522370 type.go:168] "Request Body" body=""
	I1206 10:31:00.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.997513  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:01.064863  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068488  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068586  522370 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:31:01.073496  522370 out.go:179] * Enabled addons: 
	I1206 10:31:01.076263  522370 addons.go:530] duration metric: took 1m33.142805076s for enable addons: enabled=[]
	I1206 10:31:01.430965  522370 type.go:168] "Request Body" body=""
	I1206 10:31:01.431062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.431429  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:01.431491  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	[... this GET /api/v1/nodes/functional-123579 request/response cycle repeats every ~500ms from 10:31:01 through 10:31:59, every attempt refused (status="" milliseconds=0); node_ready.go:55 logs the same "connection refused" warning roughly every 2s ...]
	W1206 10:31:57.932052  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:59.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.930832  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.931193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.431018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.431383  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:00.431435  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:00.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.930823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.931167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.430987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.430793  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.430870  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.431209  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.931198  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.931274  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:02.931666  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:03.431269  522370 type.go:168] "Request Body" body=""
	I1206 10:32:03.431341  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.431598  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:03.931409  522370 type.go:168] "Request Body" body=""
	I1206 10:32:03.931493  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.931843  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:04.431512  522370 type.go:168] "Request Body" body=""
	I1206 10:32:04.431588  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.431937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:04.930649  522370 type.go:168] "Request Body" body=""
	I1206 10:32:04.930727  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.930996  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:05.430715  522370 type.go:168] "Request Body" body=""
	I1206 10:32:05.430789  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:05.431201  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:05.930878  522370 type.go:168] "Request Body" body=""
	I1206 10:32:05.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.931320  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:06.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:32:06.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.431112  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:06.930766  522370 type.go:168] "Request Body" body=""
	I1206 10:32:06.930839  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:07.430760  522370 type.go:168] "Request Body" body=""
	I1206 10:32:07.430842  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.431197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:07.431255  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:07.931421  522370 type.go:168] "Request Body" body=""
	I1206 10:32:07.931493  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.931819  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:08.431684  522370 type.go:168] "Request Body" body=""
	I1206 10:32:08.431770  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:08.432111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:08.930828  522370 type.go:168] "Request Body" body=""
	I1206 10:32:08.930926  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:08.931327  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:09.430732  522370 type.go:168] "Request Body" body=""
	I1206 10:32:09.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:09.431070  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:09.930755  522370 type.go:168] "Request Body" body=""
	I1206 10:32:09.930836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:09.931203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:09.931266  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:10.430768  522370 type.go:168] "Request Body" body=""
	I1206 10:32:10.430879  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:10.431217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:10.930889  522370 type.go:168] "Request Body" body=""
	I1206 10:32:10.930960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:10.931259  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.430792  522370 type.go:168] "Request Body" body=""
	I1206 10:32:11.430872  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.431253  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.930964  522370 type.go:168] "Request Body" body=""
	I1206 10:32:11.931039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.931369  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:11.931419  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:12.430849  522370 type.go:168] "Request Body" body=""
	I1206 10:32:12.430927  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:12.931326  522370 type.go:168] "Request Body" body=""
	I1206 10:32:12.931399  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.931728  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:13.431494  522370 type.go:168] "Request Body" body=""
	I1206 10:32:13.431575  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.431906  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:13.931683  522370 type.go:168] "Request Body" body=""
	I1206 10:32:13.931761  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.932130  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:13.932175  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:14.430772  522370 type.go:168] "Request Body" body=""
	I1206 10:32:14.430846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.431201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:14.930793  522370 type.go:168] "Request Body" body=""
	I1206 10:32:14.930874  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:15.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:32:15.430896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.431300  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:15.930804  522370 type.go:168] "Request Body" body=""
	I1206 10:32:15.930877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.931264  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:16.431005  522370 type.go:168] "Request Body" body=""
	I1206 10:32:16.431079  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.431470  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:16.431521  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:16.931228  522370 type.go:168] "Request Body" body=""
	I1206 10:32:16.931295  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.931553  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:17.431420  522370 type.go:168] "Request Body" body=""
	I1206 10:32:17.431494  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.431814  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:17.930648  522370 type.go:168] "Request Body" body=""
	I1206 10:32:17.930727  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.931063  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.430755  522370 type.go:168] "Request Body" body=""
	I1206 10:32:18.430879  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.431264  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.930777  522370 type.go:168] "Request Body" body=""
	I1206 10:32:18.930861  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.931245  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:18.931300  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:19.430824  522370 type.go:168] "Request Body" body=""
	I1206 10:32:19.430907  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.431295  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:19.930976  522370 type.go:168] "Request Body" body=""
	I1206 10:32:19.931045  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:20.431030  522370 type.go:168] "Request Body" body=""
	I1206 10:32:20.431116  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.431515  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:20.931305  522370 type.go:168] "Request Body" body=""
	I1206 10:32:20.931378  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.931713  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:20.931766  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:21.431535  522370 type.go:168] "Request Body" body=""
	I1206 10:32:21.431650  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.431909  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:21.931668  522370 type.go:168] "Request Body" body=""
	I1206 10:32:21.931751  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.932103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:22.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:32:22.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.431161  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:22.931142  522370 type.go:168] "Request Body" body=""
	I1206 10:32:22.931211  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.931472  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:23.431308  522370 type.go:168] "Request Body" body=""
	I1206 10:32:23.431380  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.431717  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:23.431770  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:23.931606  522370 type.go:168] "Request Body" body=""
	I1206 10:32:23.931684  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.932028  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:24.430722  522370 type.go:168] "Request Body" body=""
	I1206 10:32:24.430852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.431235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:24.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:32:24.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.931200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:32:25.430855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.930892  522370 type.go:168] "Request Body" body=""
	I1206 10:32:25.930959  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.931238  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:25.931278  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:26.430787  522370 type.go:168] "Request Body" body=""
	I1206 10:32:26.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.431231  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:26.930954  522370 type.go:168] "Request Body" body=""
	I1206 10:32:26.931033  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.931398  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.431111  522370 type.go:168] "Request Body" body=""
	I1206 10:32:27.431201  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.431504  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.931658  522370 type.go:168] "Request Body" body=""
	I1206 10:32:27.931732  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.932069  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:27.932132  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:28.430761  522370 type.go:168] "Request Body" body=""
	I1206 10:32:28.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.431176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:32:28.930783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.931095  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.430760  522370 type.go:168] "Request Body" body=""
	I1206 10:32:29.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.930785  522370 type.go:168] "Request Body" body=""
	I1206 10:32:29.930863  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.430912  522370 type.go:168] "Request Body" body=""
	I1206 10:32:30.430988  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.431300  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:30.431356  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:30.930756  522370 type.go:168] "Request Body" body=""
	I1206 10:32:30.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.931179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:32:31.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.930917  522370 type.go:168] "Request Body" body=""
	I1206 10:32:31.930986  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.931271  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.430794  522370 type.go:168] "Request Body" body=""
	I1206 10:32:32.430881  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.431249  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.931263  522370 type.go:168] "Request Body" body=""
	I1206 10:32:32.931386  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.931723  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:32.931782  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:33.431496  522370 type.go:168] "Request Body" body=""
	I1206 10:32:33.431581  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:33.931657  522370 type.go:168] "Request Body" body=""
	I1206 10:32:33.931736  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.932091  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:32:34.430790  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.431152  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.930699  522370 type.go:168] "Request Body" body=""
	I1206 10:32:34.930768  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.931073  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:32:35.430837  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.431193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:35.431247  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:35.930738  522370 type.go:168] "Request Body" body=""
	I1206 10:32:35.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.931165  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.430720  522370 type.go:168] "Request Body" body=""
	I1206 10:32:36.430791  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:32:36.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.931108  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.430776  522370 type.go:168] "Request Body" body=""
	I1206 10:32:37.430857  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.431190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.931387  522370 type.go:168] "Request Body" body=""
	I1206 10:32:37.931455  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.931795  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:37.931855  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:38.431623  522370 type.go:168] "Request Body" body=""
	I1206 10:32:38.431705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.432052  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:38.930772  522370 type.go:168] "Request Body" body=""
	I1206 10:32:38.930850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:32:39.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.431143  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:32:39.930822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.931187  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:40.430754  522370 type.go:168] "Request Body" body=""
	I1206 10:32:40.430831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.431262  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:40.431317  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:40.930972  522370 type.go:168] "Request Body" body=""
	I1206 10:32:40.931048  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.931346  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.430760  522370 type.go:168] "Request Body" body=""
	I1206 10:32:41.430833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.431192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.930757  522370 type.go:168] "Request Body" body=""
	I1206 10:32:41.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:32:42.430816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.431140  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.931171  522370 type.go:168] "Request Body" body=""
	I1206 10:32:42.931246  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.931610  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:42.931666  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:43.431315  522370 type.go:168] "Request Body" body=""
	I1206 10:32:43.431391  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.431734  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.931465  522370 type.go:168] "Request Body" body=""
	I1206 10:32:43.931536  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.931803  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.431545  522370 type.go:168] "Request Body" body=""
	I1206 10:32:44.431622  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.431960  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.931639  522370 type.go:168] "Request Body" body=""
	I1206 10:32:44.931734  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.932055  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:44.932114  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:45.430772  522370 type.go:168] "Request Body" body=""
	I1206 10:32:45.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.431116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:32:45.930868  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.931291  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:32:46.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.930863  522370 type.go:168] "Request Body" body=""
	I1206 10:32:46.930930  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:32:47.430887  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.431295  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:47.431359  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.931474  522370 type.go:168] "Request Body" body=""
	I1206 10:32:47.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.931907  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.431677  522370 type.go:168] "Request Body" body=""
	I1206 10:32:48.431748  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.432085  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.930788  522370 type.go:168] "Request Body" body=""
	I1206 10:32:48.930871  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.931291  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.931153  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET/empty-response pair repeats every ~500ms, and node_ready.go:55 emits the same "connection refused" retry warning roughly every 2s, from 10:32:49 through 10:33:50; every attempt fails identically ...]
	I1206 10:33:50.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:33:50.930827  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.931179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:51.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:33:51.430805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.431143  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:51.431197  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:51.930877  522370 type.go:168] "Request Body" body=""
	I1206 10:33:51.930958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.931307  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.431057  522370 type.go:168] "Request Body" body=""
	I1206 10:33:52.431157  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.431503  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.931288  522370 type.go:168] "Request Body" body=""
	I1206 10:33:52.931355  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:53.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:33:53.431421  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.431742  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:53.431799  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:53.931572  522370 type.go:168] "Request Body" body=""
	I1206 10:33:53.931647  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.931997  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:33:54.430806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.431078  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:33:54.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:55.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:33:55.430844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.431172  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:55.930729  522370 type.go:168] "Request Body" body=""
	I1206 10:33:55.930801  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.931082  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:55.931151  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:56.430895  522370 type.go:168] "Request Body" body=""
	I1206 10:33:56.430967  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.431320  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:56.931032  522370 type.go:168] "Request Body" body=""
	I1206 10:33:56.931110  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.931459  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.430829  522370 type.go:168] "Request Body" body=""
	I1206 10:33:57.430902  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.931275  522370 type.go:168] "Request Body" body=""
	I1206 10:33:57.931349  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.931687  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:57.931743  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:58.431516  522370 type.go:168] "Request Body" body=""
	I1206 10:33:58.431614  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:58.431939  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:58.930634  522370 type.go:168] "Request Body" body=""
	I1206 10:33:58.930706  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:58.930955  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:59.430684  522370 type.go:168] "Request Body" body=""
	I1206 10:33:59.430758  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:59.431050  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:59.930637  522370 type.go:168] "Request Body" body=""
	I1206 10:33:59.930735  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:59.931074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:00.430781  522370 type.go:168] "Request Body" body=""
	I1206 10:34:00.430869  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.431217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:00.431315  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:00.930729  522370 type.go:168] "Request Body" body=""
	I1206 10:34:00.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.931148  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.430848  522370 type.go:168] "Request Body" body=""
	I1206 10:34:01.430922  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.431286  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.930713  522370 type.go:168] "Request Body" body=""
	I1206 10:34:01.930791  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.931110  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.430761  522370 type.go:168] "Request Body" body=""
	I1206 10:34:02.430835  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:34:02.931109  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.931453  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:02.931514  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:03.430966  522370 type.go:168] "Request Body" body=""
	I1206 10:34:03.431062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.431375  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:03.930731  522370 type.go:168] "Request Body" body=""
	I1206 10:34:03.930814  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.430751  522370 type.go:168] "Request Body" body=""
	I1206 10:34:04.430825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.930717  522370 type.go:168] "Request Body" body=""
	I1206 10:34:04.930787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.931097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:05.430797  522370 type.go:168] "Request Body" body=""
	I1206 10:34:05.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.431234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:05.431295  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:05.930980  522370 type.go:168] "Request Body" body=""
	I1206 10:34:05.931058  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.931414  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:34:06.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.431089  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:34:06.930844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.931244  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.430820  522370 type.go:168] "Request Body" body=""
	I1206 10:34:07.430894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.431251  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.931445  522370 type.go:168] "Request Body" body=""
	I1206 10:34:07.931516  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.931771  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:07.931812  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:08.431524  522370 type.go:168] "Request Body" body=""
	I1206 10:34:08.431601  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.431921  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:08.930678  522370 type.go:168] "Request Body" body=""
	I1206 10:34:08.930767  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.430817  522370 type.go:168] "Request Body" body=""
	I1206 10:34:09.430892  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.930925  522370 type.go:168] "Request Body" body=""
	I1206 10:34:09.931018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.931371  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:10.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:34:10.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.431202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:10.431255  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:10.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:34:10.930831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.931090  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:34:11.430882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:34:11.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.430751  522370 type.go:168] "Request Body" body=""
	I1206 10:34:12.430822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.431076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.930963  522370 type.go:168] "Request Body" body=""
	I1206 10:34:12.931034  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.931391  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:12.931447  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:13.430984  522370 type.go:168] "Request Body" body=""
	I1206 10:34:13.431059  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.431405  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.930730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:13.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.931082  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.430699  522370 type.go:168] "Request Body" body=""
	I1206 10:34:14.430785  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.930773  522370 type.go:168] "Request Body" body=""
	I1206 10:34:14.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.931210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:15.430808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.431058  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:15.431101  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:34:15.930809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.931163  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.430877  522370 type.go:168] "Request Body" body=""
	I1206 10:34:16.430949  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.431309  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:34:16.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.931088  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.430798  522370 type.go:168] "Request Body" body=""
	I1206 10:34:17.430879  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.431288  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.931511  522370 type.go:168] "Request Body" body=""
	I1206 10:34:17.931612  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.931976  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.431590  522370 type.go:168] "Request Body" body=""
	I1206 10:34:18.431659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.432004  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:34:18.930808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:19.430863  522370 type.go:168] "Request Body" body=""
	I1206 10:34:19.430939  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.431293  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:19.431346  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:19.930992  522370 type.go:168] "Request Body" body=""
	I1206 10:34:19.931064  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.931410  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:34:20.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.931558  522370 type.go:168] "Request Body" body=""
	I1206 10:34:20.931639  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.430701  522370 type.go:168] "Request Body" body=""
	I1206 10:34:21.430786  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:34:21.930827  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.931172  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:21.931232  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:22.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:34:22.430999  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.431346  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:22.931270  522370 type.go:168] "Request Body" body=""
	I1206 10:34:22.931368  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.931817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.431585  522370 type.go:168] "Request Body" body=""
	I1206 10:34:23.431659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.431973  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:34:23.930759  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.931087  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:24.430797  522370 type.go:168] "Request Body" body=""
	I1206 10:34:24.430872  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.431117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:24.431176  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:24.930806  522370 type.go:168] "Request Body" body=""
	I1206 10:34:24.930882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.430788  522370 type.go:168] "Request Body" body=""
	I1206 10:34:25.430861  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.930868  522370 type.go:168] "Request Body" body=""
	I1206 10:34:25.930939  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.931218  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:26.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:34:26.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.431213  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:26.431274  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:26.930768  522370 type.go:168] "Request Body" body=""
	I1206 10:34:26.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.931192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.430884  522370 type.go:168] "Request Body" body=""
	I1206 10:34:27.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.431252  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.931325  522370 type.go:168] "Request Body" body=""
	I1206 10:34:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.931744  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.431435  522370 type.go:168] "Request Body" body=""
	I1206 10:34:28.431523  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.431850  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:28.431909  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:28.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:34:28.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.931970  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:34:29.430803  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.431141  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.930784  522370 type.go:168] "Request Body" body=""
	I1206 10:34:29.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.430844  522370 type.go:168] "Request Body" body=""
	I1206 10:34:30.430919  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.431210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.930768  522370 type.go:168] "Request Body" body=""
	I1206 10:34:30.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.931235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:30.931295  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:31.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:34:31.430887  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.431198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:31.930745  522370 type.go:168] "Request Body" body=""
	I1206 10:34:31.930813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.931077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:34:32.430840  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.431195  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.931075  522370 type.go:168] "Request Body" body=""
	I1206 10:34:32.931167  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.931468  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:32.931518  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:33.431100  522370 type.go:168] "Request Body" body=""
	I1206 10:34:33.431184  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.431485  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:33.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:33.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.931221  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:34:34.430877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.431210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.930739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:34.930818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.931162  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.430773  522370 type.go:168] "Request Body" body=""
	I1206 10:34:35.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.431214  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:35.431268  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:35.930868  522370 type.go:168] "Request Body" body=""
	I1206 10:34:35.930944  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.430720  522370 type.go:168] "Request Body" body=""
	I1206 10:34:36.430791  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.431040  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.930739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:36.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.931195  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.430910  522370 type.go:168] "Request Body" body=""
	I1206 10:34:37.430986  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.431301  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:37.431348  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:37.931302  522370 type.go:168] "Request Body" body=""
	I1206 10:34:37.931371  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.931629  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.431530  522370 type.go:168] "Request Body" body=""
	I1206 10:34:38.431619  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.431930  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.930656  522370 type.go:168] "Request Body" body=""
	I1206 10:34:38.930736  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.931104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.430791  522370 type.go:168] "Request Body" body=""
	I1206 10:34:39.430869  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.431157  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.930904  522370 type.go:168] "Request Body" body=""
	I1206 10:34:39.930984  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.931350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:39.931412  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:40.431091  522370 type.go:168] "Request Body" body=""
	I1206 10:34:40.431191  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.431534  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:40.931277  522370 type.go:168] "Request Body" body=""
	I1206 10:34:40.931349  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.931605  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.431406  522370 type.go:168] "Request Body" body=""
	I1206 10:34:41.431517  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.431838  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.931609  522370 type.go:168] "Request Body" body=""
	I1206 10:34:41.931696  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.932047  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:41.932102  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:42.430748  522370 type.go:168] "Request Body" body=""
	I1206 10:34:42.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.431103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.931215  522370 type.go:168] "Request Body" body=""
	I1206 10:34:42.931317  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.931648  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.431450  522370 type.go:168] "Request Body" body=""
	I1206 10:34:43.431526  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.431858  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.931579  522370 type.go:168] "Request Body" body=""
	I1206 10:34:43.931659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.931991  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.431656  522370 type.go:168] "Request Body" body=""
	I1206 10:34:44.431730  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.432129  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:44.432185  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:44.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:34:44.930810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.430889  522370 type.go:168] "Request Body" body=""
	I1206 10:34:45.430961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.930943  522370 type.go:168] "Request Body" body=""
	I1206 10:34:45.931026  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.931431  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:34:46.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.431156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.930832  522370 type.go:168] "Request Body" body=""
	I1206 10:34:46.930896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:46.931219  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:47.430865  522370 type.go:168] "Request Body" body=""
	I1206 10:34:47.430941  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.431318  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.931392  522370 type.go:168] "Request Body" body=""
	I1206 10:34:47.931469  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.931802  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.431602  522370 type.go:168] "Request Body" body=""
	I1206 10:34:48.431696  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.432026  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:48.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.931294  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:48.931353  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:49.431025  522370 type.go:168] "Request Body" body=""
	I1206 10:34:49.431108  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.431448  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.930724  522370 type.go:168] "Request Body" body=""
	I1206 10:34:49.930802  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.430791  522370 type.go:168] "Request Body" body=""
	I1206 10:34:50.430867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.431248  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.930784  522370 type.go:168] "Request Body" body=""
	I1206 10:34:50.930864  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.931205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:34:51.430811  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.431080  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:51.431150  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:51.930837  522370 type.go:168] "Request Body" body=""
	I1206 10:34:51.930930  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:52.430851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.431202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.931267  522370 type.go:168] "Request Body" body=""
	I1206 10:34:52.931348  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.931664  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.431501  522370 type.go:168] "Request Body" body=""
	I1206 10:34:53.431595  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.431957  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:53.432013  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:53.931658  522370 type.go:168] "Request Body" body=""
	I1206 10:34:53.931738  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.932077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.430749  522370 type.go:168] "Request Body" body=""
	I1206 10:34:54.430871  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.431247  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:34:54.930837  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.931206  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.430922  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.431013  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.431352  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.930722  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.931160  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:55.931217  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:56.430917  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.430995  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.431296  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.931062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.931423  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.431029  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.431303  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.931550  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.931631  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.932029  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:58.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.431155  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.930843  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.930914  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.430950  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.431266  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.930906  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.430976  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.431061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.431541  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:00.431605  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:00.931369  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.431561  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.930651  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.930724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.930990  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.430729  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.430828  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.931011  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.931089  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.931442  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:02.931498  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:03.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.430888  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.431297  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.930812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:05.431281  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:05.930825  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.930901  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.931256  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.431148  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.931217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.430991  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.431345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:07.431402  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:07.931619  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.931687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.931937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.430638  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.430708  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.431028  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.431338  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:09.931252  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.430779  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.430901  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.430718  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.431211  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.931636  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.431462  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.431538  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.431885  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.931641  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.431257  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:14.930975  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.931053  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.931466  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.431217  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.431580  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.931377  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.931454  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.931796  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.431484  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.431559  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.431888  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:16.431945  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:16.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.931713  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.931977  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.931466  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.931549  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.931886  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.431642  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.431964  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:18.432006  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:18.930687  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.930760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.931117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.430852  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.430938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.431325  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.930751  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.930852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.931255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:20.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:21.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.930732  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.931186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.430734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.430810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.931191  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.931524  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:22.931567  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:23.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.431424  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.431074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.430898  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.431343  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:25.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:25.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.931103  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.931404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.931215  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.430765  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.931322  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.931759  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:27.931820  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:28.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.431179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.930795  522370 node_ready.go:38] duration metric: took 6m0.000265171s for node "functional-123579" to be "Ready" ...
	I1206 10:35:28.934235  522370 out.go:203] 
	W1206 10:35:28.937230  522370 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:35:28.937255  522370 out.go:285] * 
	W1206 10:35:28.939411  522370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:35:28.942269  522370 out.go:203] 

                                                
                                                
** /stderr **
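The stderr trace above is minikube's node-readiness wait in action: it polls GET .../api/v1/nodes/functional-123579 roughly every 500ms, logs "connection refused" warnings while the apiserver on 192.168.49.2:8441 is unreachable, and gives up once the 6m0s deadline expires with "WaitNodeCondition: context deadline exceeded". A minimal Go sketch of that polling pattern follows; the URL, the bare HTTP client, and the skipped TLS verification/auth are illustrative stand-ins for what minikube actually drives through client-go, not its real implementation:

```go
// Sketch: poll a node object every 500ms until it reports Ready or the
// context deadline expires, mirroring the retry/warn loop in the log above.
package main

import (
	"context"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus models only the conditions we need from a v1.Node.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// The real client authenticates and verifies the apiserver cert;
		// skipping verification here just keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Surfaces as "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused";
				// warn and retry, as node_ready.go does in the trace above.
				continue
			}
			var node nodeStatus
			decodeErr := json.NewDecoder(resp.Body).Decode(&node)
			resp.Body.Close()
			if decodeErr != nil {
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					return nil
				}
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-123579"); err != nil {
		fmt.Println("wait failed:", err)
	}
}
```

The shape of the trace matters for triage: every iteration failed at dial time, so the node object was never fetched at all. The failure is the apiserver not listening on 8441, not an unhealthy kubelet.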
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-123579 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.752039587s for "functional-123579" cluster.
I1206 10:35:29.577274  488068 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
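The docker inspect dump that follows is the container's full record; for triage, a handful of fields usually suffice (container state, the host port mapped to the apiserver's 8441/tcp, and the container IP). As a hedged sketch, those fields can be pulled directly with docker's `--format` Go-template flag rather than reading the whole JSON dump, assuming the container name from this report:

```go
// Sketch: extract the triage-relevant fields from `docker inspect` via
// --format templates instead of scanning the full JSON output by hand.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const name = "functional-123579"
	queries := []struct{ label, tmpl string }{
		{"state", `{{.State.Status}}`},
		{"apiserver host port", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`},
		{"container IP", `{{(index .NetworkSettings.Networks "functional-123579").IPAddress}}`},
	}
	for _, q := range queries {
		out, err := exec.Command("docker", "inspect", "-f", q.tmpl, name).Output()
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", q.label, err)
			continue
		}
		fmt.Printf("%s: %s", q.label, out)
	}
}
```

Run against the dump below, this would report state "running", host port 33186 for 8441/tcp, and IP 192.168.49.2: the container is up and the port is published, which points the failure at the apiserver process inside the container rather than at Docker networking.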
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
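The "22/tcp" entry in the NetworkSettings.Ports map above (host binding 127.0.0.1:33183) is the port that minikube's SSH provisioning resolves in the Last Start log further down. A minimal Go sketch of that lookup, assuming the container name from this run and a struct pared down to only the fields read (not Docker's full API type):

	// portlookup.go: read the host port bound to the container's 22/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry mirrors just the NetworkSettings.Ports bindings
	// shown in the docker inspect output above.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "functional-123579").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		// Index 0 is the first (here, only) host binding for 22/tcp.
		ssh := entries[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("%s:%s\n", ssh.HostIp, ssh.HostPort) // e.g. 127.0.0.1:33183
	}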
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (420.829003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
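The --format={{.Host}} flag on the status command above is a Go text/template rendered against minikube's status object, which is why the captured stdout reads just "Running" even though the process exits 2. A toy illustration of evaluating a template of that shape; Status here is a hypothetical stand-in, not minikube's actual type:

	// statusfmt.go: render a --format style Go template against a struct.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in; only Host matters for {{.Host}}.
	type Status struct {
		Host    string
		Kubelet string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Writes "Running" to stdout, matching the capture above.
		tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped"})
	}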
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 logs -n 25: (1.076478785s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-137526 ssh findmnt -T /mount1                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount1 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount2 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount3 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ start          │ -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount1                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ start          │ -p functional-137526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                   │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ start          │ -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount2                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ dashboard      │ --url --port 36195 -p functional-137526 --alsologtostderr -v=1                                                                                    │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh            │ functional-137526 ssh findmnt -T /mount3                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ mount          │ -p functional-137526 --kill=true                                                                                                                  │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format short --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh            │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image          │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete         │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start          │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-123579 --alsologtostderr -v=8                                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:29:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:29:22.870980  522370 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:22.871170  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871181  522370 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:22.871187  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871464  522370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:29:22.871865  522370 out.go:368] Setting JSON to false
	I1206 10:29:22.872761  522370 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11514,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:29:22.872829  522370 start.go:143] virtualization:  
	I1206 10:29:22.876360  522370 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:29:22.880135  522370 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:29:22.880243  522370 notify.go:221] Checking for updates...
	I1206 10:29:22.885979  522370 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:29:22.888900  522370 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:22.891673  522370 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:29:22.894419  522370 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:29:22.897199  522370 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:29:22.900505  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:22.900663  522370 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:29:22.930035  522370 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:29:22.930154  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:22.994169  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:22.985097483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:22.994270  522370 docker.go:319] overlay module found
	I1206 10:29:22.997336  522370 out.go:179] * Using the docker driver based on existing profile
	I1206 10:29:23.000134  522370 start.go:309] selected driver: docker
	I1206 10:29:23.000177  522370 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.000290  522370 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:29:23.000407  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:23.064912  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:23.055716934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:23.065339  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:23.065406  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:23.065455  522370 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.068684  522370 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:29:23.071544  522370 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:29:23.074549  522370 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:29:23.077588  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:23.077638  522370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:29:23.077648  522370 cache.go:65] Caching tarball of preloaded images
	I1206 10:29:23.077715  522370 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:29:23.077742  522370 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:29:23.077753  522370 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:29:23.077861  522370 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:29:23.100973  522370 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:29:23.100996  522370 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:29:23.101011  522370 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:29:23.101047  522370 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:29:23.101106  522370 start.go:364] duration metric: took 36.569µs to acquireMachinesLock for "functional-123579"
	I1206 10:29:23.101131  522370 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:29:23.101140  522370 fix.go:54] fixHost starting: 
	I1206 10:29:23.101403  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:23.120661  522370 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:29:23.120697  522370 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:29:23.124123  522370 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:29:23.124169  522370 machine.go:94] provisionDockerMachine start ...
	I1206 10:29:23.124278  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.148209  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.148655  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.148670  522370 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:29:23.311217  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.311246  522370 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:29:23.311337  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.330615  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.330948  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.330967  522370 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:29:23.492326  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.492442  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.511425  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.511745  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.511767  522370 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:29:23.663802  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:29:23.663828  522370 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:29:23.663852  522370 ubuntu.go:190] setting up certificates
	I1206 10:29:23.663862  522370 provision.go:84] configureAuth start
	I1206 10:29:23.663938  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:23.683626  522370 provision.go:143] copyHostCerts
	I1206 10:29:23.683677  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683720  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:29:23.683732  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683811  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:29:23.683905  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683927  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:29:23.683935  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683965  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:29:23.684012  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684032  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:29:23.684040  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684065  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:29:23.684117  522370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:29:23.851072  522370 provision.go:177] copyRemoteCerts
	I1206 10:29:23.851167  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:29:23.851208  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.869258  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:23.976487  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:29:23.976551  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:29:23.994935  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:29:23.995001  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:29:24.028988  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:29:24.029065  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:29:24.047435  522370 provision.go:87] duration metric: took 383.548866ms to configureAuth
	I1206 10:29:24.047460  522370 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:29:24.047651  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:24.047753  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.065906  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:24.066279  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:24.066304  522370 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:29:24.394899  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:29:24.394922  522370 machine.go:97] duration metric: took 1.270744832s to provisionDockerMachine
	I1206 10:29:24.394933  522370 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:29:24.394946  522370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:29:24.395040  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:29:24.395089  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.413037  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.518950  522370 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:29:24.522167  522370 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:29:24.522190  522370 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:29:24.522196  522370 command_runner.go:130] > VERSION_ID="12"
	I1206 10:29:24.522201  522370 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:29:24.522206  522370 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:29:24.522219  522370 command_runner.go:130] > ID=debian
	I1206 10:29:24.522224  522370 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:29:24.522228  522370 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:29:24.522234  522370 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:29:24.522273  522370 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:29:24.522296  522370 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:29:24.522307  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:29:24.522366  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:29:24.522448  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:29:24.522465  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 10:29:24.522539  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:29:24.522547  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> /etc/test/nested/copy/488068/hosts
	I1206 10:29:24.522590  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:29:24.529941  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:24.547406  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:29:24.564885  522370 start.go:296] duration metric: took 169.937214ms for postStartSetup
	I1206 10:29:24.565009  522370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:29:24.565071  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.582051  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.684564  522370 command_runner.go:130] > 18%
	I1206 10:29:24.685308  522370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:29:24.690194  522370 command_runner.go:130] > 161G
	I1206 10:29:24.690863  522370 fix.go:56] duration metric: took 1.589719046s for fixHost
	I1206 10:29:24.690882  522370 start.go:83] releasing machines lock for "functional-123579", held for 1.589762361s
	I1206 10:29:24.690959  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:24.710139  522370 ssh_runner.go:195] Run: cat /version.json
	I1206 10:29:24.710198  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.710437  522370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:29:24.710491  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.744752  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.750995  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.850618  522370 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:29:24.850833  522370 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:24.941044  522370 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:29:24.943691  522370 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:29:24.943731  522370 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:29:24.943796  522370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:29:24.982406  522370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:29:24.986710  522370 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:29:24.986856  522370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:29:24.986921  522370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:29:24.995206  522370 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:29:24.995230  522370 start.go:496] detecting cgroup driver to use...
	I1206 10:29:24.995260  522370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:29:24.995314  522370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:29:25.015488  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:29:25.029388  522370 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:29:25.029474  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:29:25.044588  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:29:25.057886  522370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:29:25.175907  522370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:29:25.297406  522370 docker.go:234] disabling docker service ...
	I1206 10:29:25.297502  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:29:25.313940  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:29:25.326948  522370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:29:25.448237  522370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:29:25.592886  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:29:25.605716  522370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:29:25.618765  522370 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:29:25.620045  522370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:29:25.620120  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.628683  522370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:29:25.628808  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.637855  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.646676  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.656251  522370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:29:25.664395  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.673385  522370 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.681859  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.691317  522370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:29:25.697883  522370 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:29:25.698954  522370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:29:25.706470  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:25.835287  522370 ssh_runner.go:195] Run: sudo systemctl restart crio
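	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values before the crio restart; this is a reconstruction from the commands in the log, not a capture from the node:
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]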
	I1206 10:29:25.994073  522370 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:29:25.994183  522370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:29:25.998083  522370 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:29:25.998204  522370 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:29:25.998238  522370 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:29:25.998335  522370 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:25.998358  522370 command_runner.go:130] > Access: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998390  522370 command_runner.go:130] > Modify: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998420  522370 command_runner.go:130] > Change: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998437  522370 command_runner.go:130] >  Birth: -
	I1206 10:29:25.998473  522370 start.go:564] Will wait 60s for crictl version
	I1206 10:29:25.998553  522370 ssh_runner.go:195] Run: which crictl
	I1206 10:29:26.004847  522370 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:29:26.004981  522370 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:29:26.037391  522370 command_runner.go:130] > Version:  0.1.0
	I1206 10:29:26.037414  522370 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:29:26.037421  522370 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:29:26.037427  522370 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:29:26.037438  522370 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:29:26.037548  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.065733  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.065769  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.065793  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.065805  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.065811  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.065822  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.065827  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.065832  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.065840  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.065845  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.065852  522370 command_runner.go:130] >      static
	I1206 10:29:26.065886  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.065897  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.065918  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.065928  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.065932  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.065941  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.065946  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.065954  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.065958  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.068082  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.095375  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.095453  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.095474  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.095491  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.095522  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.095561  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.095582  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.095622  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.095651  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.095669  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.095698  522370 command_runner.go:130] >      static
	I1206 10:29:26.095717  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.095735  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.095756  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.095787  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.095810  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.095867  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.095888  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.095910  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.095930  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.103062  522370 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:29:26.105990  522370 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:29:26.122102  522370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:29:26.125939  522370 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:29:26.126304  522370 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:29:26.126416  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:26.126475  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.161627  522370 command_runner.go:130] > {
	I1206 10:29:26.161646  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.161650  522370 command_runner.go:130] >     {
	I1206 10:29:26.161662  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.161666  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161672  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.161676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161681  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161689  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.161697  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.161702  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161707  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.161711  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161719  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161729  522370 command_runner.go:130] >     },
	I1206 10:29:26.161732  522370 command_runner.go:130] >     {
	I1206 10:29:26.161739  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.161743  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161748  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.161751  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161757  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161765  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.161774  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.161777  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161781  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.161785  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161792  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161795  522370 command_runner.go:130] >     },
	I1206 10:29:26.161799  522370 command_runner.go:130] >     {
	I1206 10:29:26.161805  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.161810  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161815  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.161818  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161822  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161830  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.161838  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.161843  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161847  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.161851  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.161856  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161859  522370 command_runner.go:130] >     },
	I1206 10:29:26.161863  522370 command_runner.go:130] >     {
	I1206 10:29:26.161869  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.161873  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161878  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.161883  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161887  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161898  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.161905  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.161908  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161912  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.161916  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.161920  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.161923  522370 command_runner.go:130] >       },
	I1206 10:29:26.161935  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161939  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161942  522370 command_runner.go:130] >     },
	I1206 10:29:26.161946  522370 command_runner.go:130] >     {
	I1206 10:29:26.161953  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.161956  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161963  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.161966  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161970  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161978  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.161986  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.161990  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161994  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.161997  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162001  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162004  522370 command_runner.go:130] >       },
	I1206 10:29:26.162008  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162011  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162014  522370 command_runner.go:130] >     },
	I1206 10:29:26.162018  522370 command_runner.go:130] >     {
	I1206 10:29:26.162024  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.162028  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162033  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.162037  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162041  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162050  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.162067  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.162071  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162075  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.162081  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162091  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162094  522370 command_runner.go:130] >       },
	I1206 10:29:26.162098  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162102  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162105  522370 command_runner.go:130] >     },
	I1206 10:29:26.162115  522370 command_runner.go:130] >     {
	I1206 10:29:26.162123  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.162128  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162134  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.162137  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162143  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162154  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.162163  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.162166  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162170  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.162173  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162178  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162181  522370 command_runner.go:130] >     },
	I1206 10:29:26.162184  522370 command_runner.go:130] >     {
	I1206 10:29:26.162191  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.162194  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162200  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.162203  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162207  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162215  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.162232  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.162235  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162239  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.162243  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162250  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162253  522370 command_runner.go:130] >       },
	I1206 10:29:26.162257  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162260  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162263  522370 command_runner.go:130] >     },
	I1206 10:29:26.162267  522370 command_runner.go:130] >     {
	I1206 10:29:26.162273  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.162277  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162281  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.162284  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162288  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162296  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.162304  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.162307  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162311  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.162315  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162318  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.162321  522370 command_runner.go:130] >       },
	I1206 10:29:26.162325  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162329  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.162333  522370 command_runner.go:130] >     }
	I1206 10:29:26.162336  522370 command_runner.go:130] >   ]
	I1206 10:29:26.162339  522370 command_runner.go:130] > }
	I1206 10:29:26.164653  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.164677  522370 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:29:26.164733  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	[second crictl images run returned the identical image list; duplicate output elided]
	I1206 10:29:26.192099  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.192121  522370 cache_images.go:86] Images are preloaded, skipping loading
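
The preload check above shells out to "sudo crictl images --output json" and inspects the returned list. As a reference for the payload shape visible in this log, here is a minimal Go sketch that parses it; the type names are invented for illustration (they are not minikube's), and note that "size" and "uid.value" arrive as JSON strings, not numbers:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shapes mirror the `crictl images --output json` payload shown above.
// Field and type names here are illustrative, not taken from minikube.
type imageList struct {
	Images []image `json:"images"`
}

type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // emitted as a string, e.g. "111333938"
	UID         *struct {
		Value string `json:"value"` // also a string, e.g. "0" or "65535"
	} `json:"uid,omitempty"`
	Username string `json:"username"`
	Pinned   bool   `json:"pinned"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

Matching the repoTags in such a list against the expected image set for the target Kubernetes version is what lets the run conclude "all images are preloaded" and skip extraction, as logged above.
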
	I1206 10:29:26.192130  522370 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:29:26.192245  522370 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
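
Two details of the generated kubelet drop-in are worth noting: Wants=crio.service ties kubelet startup to the container runtime, and the empty ExecStart= line is deliberate, since in a systemd drop-in an empty assignment clears the ExecStart inherited from the base unit before the override command line is set. A minimal Go sketch of how such a drop-in could be templated (illustrative only, not minikube's actual generator; the flag subset and data values are taken from the unit logged above):

package main

import (
	"os"
	"text/template"
)

// The empty ExecStart= resets the base unit's command list before the
// override is appended; this is standard systemd drop-in semantics.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(dropIn))
	data := struct{ BinDir, NodeName, NodeIP string }{
		BinDir:   "/var/lib/minikube/binaries/v1.35.0-beta.0",
		NodeName: "functional-123579",
		NodeIP:   "192.168.49.2",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
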
	I1206 10:29:26.192338  522370 ssh_runner.go:195] Run: crio config
	I1206 10:29:26.220366  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.219989922Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:29:26.220411  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220176363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:29:26.220654  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22050187Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:29:26.220871  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220715248Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:29:26.221165  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22098899Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:26.221621  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.221432459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:29:26.238478  522370 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
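
The stderr messages above trace CRI-O's configuration merge order: the base file /etc/crio/crio.conf (skipped here because it does not exist), then every drop-in under /etc/crio/crio.conf.d, applied in lexical filename order, which is why 02-crio.conf is read before 10-crio.conf and later files win on conflicting keys. A rough Go sketch of that resolution order, under the assumption that lexical filename order is the only rule in play:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configLoadOrder mirrors the order logged above: the base file first
// (if it exists), then conf.d drop-ins. os.ReadDir already returns
// entries sorted by name, which yields the 02-before-10 ordering.
func configLoadOrder(base, dropInDir string) []string {
	var order []string
	if _, err := os.Stat(base); err == nil {
		order = append(order, base)
	}
	entries, err := os.ReadDir(dropInDir)
	if err != nil {
		return order
	}
	for _, e := range entries {
		if !e.IsDir() {
			order = append(order, filepath.Join(dropInDir, e.Name()))
		}
	}
	return order
}

func main() {
	for _, p := range configLoadOrder("/etc/crio/crio.conf", "/etc/crio/crio.conf.d") {
		fmt.Println(p)
	}
}

The resolved configuration produced by this merge is what "crio config" then prints back out, beginning with the commented defaults below.
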
	I1206 10:29:26.263608  522370 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:29:26.263638  522370 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:29:26.263647  522370 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:29:26.263651  522370 command_runner.go:130] > #
	I1206 10:29:26.263687  522370 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:29:26.263707  522370 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:29:26.263714  522370 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:29:26.263721  522370 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:29:26.263726  522370 command_runner.go:130] > # reload'.
	I1206 10:29:26.263732  522370 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:29:26.263756  522370 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:29:26.263778  522370 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:29:26.263789  522370 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:29:26.263793  522370 command_runner.go:130] > [crio]
	I1206 10:29:26.263802  522370 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:29:26.263811  522370 command_runner.go:130] > # containers images, in this directory.
	I1206 10:29:26.263826  522370 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:29:26.263848  522370 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:29:26.263868  522370 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:29:26.263877  522370 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:29:26.263885  522370 command_runner.go:130] > # imagestore = ""
	I1206 10:29:26.263894  522370 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:29:26.263901  522370 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:29:26.263908  522370 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:29:26.263914  522370 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:29:26.263920  522370 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:29:26.263936  522370 command_runner.go:130] > # storage_option = [
	I1206 10:29:26.263952  522370 command_runner.go:130] > # ]
	I1206 10:29:26.263965  522370 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:29:26.263972  522370 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:29:26.263985  522370 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:29:26.263995  522370 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:29:26.264002  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:29:26.264006  522370 command_runner.go:130] > # always happen on a node reboot
	I1206 10:29:26.264013  522370 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:29:26.264036  522370 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:29:26.264050  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:29:26.264055  522370 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:29:26.264060  522370 command_runner.go:130] > # version_file_persist = ""
	I1206 10:29:26.264078  522370 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:29:26.264092  522370 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:29:26.264096  522370 command_runner.go:130] > # internal_wipe = true
	I1206 10:29:26.264105  522370 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:29:26.264113  522370 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:29:26.264117  522370 command_runner.go:130] > # internal_repair = true
	I1206 10:29:26.264124  522370 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:29:26.264131  522370 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:29:26.264150  522370 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:29:26.264171  522370 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:29:26.264181  522370 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:29:26.264188  522370 command_runner.go:130] > [crio.api]
	I1206 10:29:26.264194  522370 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:29:26.264202  522370 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:29:26.264208  522370 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:29:26.264214  522370 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:29:26.264221  522370 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:29:26.264226  522370 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:29:26.264241  522370 command_runner.go:130] > # stream_port = "0"
	I1206 10:29:26.264256  522370 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:29:26.264261  522370 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:29:26.264279  522370 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:29:26.264295  522370 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:29:26.264302  522370 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:29:26.264317  522370 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264326  522370 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:29:26.264332  522370 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:29:26.264338  522370 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264355  522370 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:29:26.264373  522370 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:29:26.264389  522370 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:29:26.264395  522370 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:29:26.264399  522370 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:29:26.264435  522370 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264448  522370 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:29:26.264456  522370 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264460  522370 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:29:26.264467  522370 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:29:26.264476  522370 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:29:26.264479  522370 command_runner.go:130] > [crio.runtime]
	I1206 10:29:26.264489  522370 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:29:26.264495  522370 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:29:26.264506  522370 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:29:26.264513  522370 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:29:26.264524  522370 command_runner.go:130] > # default_ulimits = [
	I1206 10:29:26.264527  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264534  522370 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:29:26.264543  522370 command_runner.go:130] > # no_pivot = false
	I1206 10:29:26.264549  522370 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:29:26.264555  522370 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:29:26.264561  522370 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:29:26.264569  522370 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:29:26.264576  522370 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:29:26.264584  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264591  522370 command_runner.go:130] > # conmon = ""
	I1206 10:29:26.264595  522370 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:29:26.264602  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:29:26.264612  522370 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:29:26.264623  522370 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:29:26.264629  522370 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:29:26.264643  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264647  522370 command_runner.go:130] > # conmon_env = [
	I1206 10:29:26.264650  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264655  522370 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:29:26.264660  522370 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:29:26.264668  522370 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:29:26.264674  522370 command_runner.go:130] > # default_env = [
	I1206 10:29:26.264677  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264683  522370 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:29:26.264699  522370 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:29:26.264703  522370 command_runner.go:130] > # selinux = false
	I1206 10:29:26.264710  522370 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:29:26.264720  522370 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:29:26.264729  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264734  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.264740  522370 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:29:26.264745  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264751  522370 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:29:26.264759  522370 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:29:26.264767  522370 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:29:26.264774  522370 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:29:26.264789  522370 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:29:26.264794  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264799  522370 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:29:26.264807  522370 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:29:26.264817  522370 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:29:26.264821  522370 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:29:26.264828  522370 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:29:26.264834  522370 command_runner.go:130] > # blockio parameters.
	I1206 10:29:26.264838  522370 command_runner.go:130] > # blockio_reload = false
	I1206 10:29:26.264849  522370 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:29:26.264856  522370 command_runner.go:130] > # irqbalance daemon.
	I1206 10:29:26.264862  522370 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:29:26.264868  522370 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:29:26.264877  522370 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:29:26.264889  522370 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:29:26.264897  522370 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:29:26.264904  522370 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:29:26.264910  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264917  522370 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:29:26.264922  522370 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:29:26.264926  522370 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:29:26.264932  522370 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:29:26.264936  522370 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:29:26.264946  522370 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:29:26.264954  522370 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:29:26.264958  522370 command_runner.go:130] > # will be added.
	I1206 10:29:26.264966  522370 command_runner.go:130] > # default_capabilities = [
	I1206 10:29:26.264970  522370 command_runner.go:130] > # 	"CHOWN",
	I1206 10:29:26.264974  522370 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:29:26.264986  522370 command_runner.go:130] > # 	"FSETID",
	I1206 10:29:26.264990  522370 command_runner.go:130] > # 	"FOWNER",
	I1206 10:29:26.264993  522370 command_runner.go:130] > # 	"SETGID",
	I1206 10:29:26.264996  522370 command_runner.go:130] > # 	"SETUID",
	I1206 10:29:26.265019  522370 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:29:26.265029  522370 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:29:26.265035  522370 command_runner.go:130] > # 	"KILL",
	I1206 10:29:26.265038  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265046  522370 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:29:26.265056  522370 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:29:26.265061  522370 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:29:26.265069  522370 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:29:26.265075  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265088  522370 command_runner.go:130] > default_sysctls = [
	I1206 10:29:26.265093  522370 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:29:26.265096  522370 command_runner.go:130] > ]
	I1206 10:29:26.265101  522370 command_runner.go:130] > # List of devices on the host that a
	I1206 10:29:26.265110  522370 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:29:26.265114  522370 command_runner.go:130] > # allowed_devices = [
	I1206 10:29:26.265118  522370 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:29:26.265123  522370 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:29:26.265127  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265134  522370 command_runner.go:130] > # List of additional devices. specified as
	I1206 10:29:26.265142  522370 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:29:26.265150  522370 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:29:26.265156  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265160  522370 command_runner.go:130] > # additional_devices = [
	I1206 10:29:26.265164  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265169  522370 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:29:26.265179  522370 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:29:26.265184  522370 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:29:26.265188  522370 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:29:26.265194  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265200  522370 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:29:26.265206  522370 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:29:26.265213  522370 command_runner.go:130] > # Defaults to false.
	I1206 10:29:26.265218  522370 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:29:26.265225  522370 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:29:26.265233  522370 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:29:26.265237  522370 command_runner.go:130] > # hooks_dir = [
	I1206 10:29:26.265245  522370 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:29:26.265248  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265264  522370 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:29:26.265271  522370 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:29:26.265277  522370 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:29:26.265282  522370 command_runner.go:130] > #
	I1206 10:29:26.265293  522370 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:29:26.265302  522370 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:29:26.265309  522370 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:29:26.265312  522370 command_runner.go:130] > #
	I1206 10:29:26.265319  522370 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:29:26.265333  522370 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:29:26.265340  522370 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:29:26.265345  522370 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:29:26.265351  522370 command_runner.go:130] > #
	I1206 10:29:26.265355  522370 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:29:26.265360  522370 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:29:26.265367  522370 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:29:26.265371  522370 command_runner.go:130] > # pids_limit = -1
	I1206 10:29:26.265378  522370 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1206 10:29:26.265386  522370 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:29:26.265392  522370 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:29:26.265403  522370 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:29:26.265407  522370 command_runner.go:130] > # log_size_max = -1
	I1206 10:29:26.265416  522370 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:29:26.265423  522370 command_runner.go:130] > # log_to_journald = false
	I1206 10:29:26.265431  522370 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:29:26.265437  522370 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:29:26.265448  522370 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:29:26.265453  522370 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:29:26.265458  522370 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:29:26.265464  522370 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:29:26.265470  522370 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:29:26.265476  522370 command_runner.go:130] > # read_only = false
	I1206 10:29:26.265482  522370 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:29:26.265491  522370 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:29:26.265495  522370 command_runner.go:130] > # live configuration reload.
	I1206 10:29:26.265508  522370 command_runner.go:130] > # log_level = "info"
	I1206 10:29:26.265514  522370 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:29:26.265523  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.265529  522370 command_runner.go:130] > # log_filter = ""
	I1206 10:29:26.265536  522370 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265542  522370 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:29:26.265548  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265557  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265564  522370 command_runner.go:130] > # uid_mappings = ""
	I1206 10:29:26.265570  522370 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265578  522370 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:29:26.265586  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265597  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265602  522370 command_runner.go:130] > # gid_mappings = ""
	I1206 10:29:26.265611  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:29:26.265620  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265626  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265635  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265642  522370 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:29:26.265648  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:29:26.265656  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265663  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265680  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265684  522370 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:29:26.265691  522370 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:29:26.265701  522370 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:29:26.265707  522370 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 10:29:26.265713  522370 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:29:26.265719  522370 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:29:26.265727  522370 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:29:26.265733  522370 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:29:26.265740  522370 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:29:26.265747  522370 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:29:26.265754  522370 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:29:26.265768  522370 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:29:26.265780  522370 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:29:26.265787  522370 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:29:26.265794  522370 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:29:26.265801  522370 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:29:26.265809  522370 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:29:26.265814  522370 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:29:26.265818  522370 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:29:26.265824  522370 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:29:26.265832  522370 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:29:26.265838  522370 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:29:26.265846  522370 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:29:26.265857  522370 command_runner.go:130] > # pinns_path = ""
	I1206 10:29:26.265863  522370 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:29:26.265869  522370 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:29:26.265874  522370 command_runner.go:130] > # enable_criu_support = true
	I1206 10:29:26.265881  522370 command_runner.go:130] > # Enable/disable the generation of the container,
	I1206 10:29:26.265887  522370 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:29:26.265894  522370 command_runner.go:130] > # enable_pod_events = false
	I1206 10:29:26.265901  522370 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:29:26.265906  522370 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:29:26.265910  522370 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:29:26.265915  522370 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:29:26.265925  522370 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1206 10:29:26.265945  522370 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:29:26.265951  522370 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:29:26.265960  522370 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:29:26.265970  522370 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:29:26.265974  522370 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:29:26.265977  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265984  522370 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:29:26.265993  522370 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:29:26.265999  522370 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:29:26.266004  522370 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:29:26.266011  522370 command_runner.go:130] > #
	I1206 10:29:26.266019  522370 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:29:26.266024  522370 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:29:26.266030  522370 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:29:26.266035  522370 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:29:26.266042  522370 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:29:26.266047  522370 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:29:26.266059  522370 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:29:26.266065  522370 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:29:26.266068  522370 command_runner.go:130] > # monitor_env = []
	I1206 10:29:26.266080  522370 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:29:26.266084  522370 command_runner.go:130] > # allowed_annotations = []
	I1206 10:29:26.266090  522370 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:29:26.266094  522370 command_runner.go:130] > # no_sync_log = false
	I1206 10:29:26.266098  522370 command_runner.go:130] > # default_annotations = {}
	I1206 10:29:26.266105  522370 command_runner.go:130] > # stream_websockets = false
	I1206 10:29:26.266112  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.266145  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.266155  522370 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:29:26.266162  522370 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:29:26.266168  522370 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:29:26.266182  522370 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:29:26.266186  522370 command_runner.go:130] > #   in $PATH.
	I1206 10:29:26.266192  522370 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:29:26.266199  522370 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:29:26.266206  522370 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of container
	I1206 10:29:26.266212  522370 command_runner.go:130] > #   state.
	I1206 10:29:26.266218  522370 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:29:26.266224  522370 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:29:26.266232  522370 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:29:26.266239  522370 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:29:26.266247  522370 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:29:26.266254  522370 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:29:26.266265  522370 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:29:26.266275  522370 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:29:26.266283  522370 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:29:26.266287  522370 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:29:26.266294  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:29:26.266304  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:29:26.266315  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:29:26.266324  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:29:26.266332  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:29:26.266339  522370 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:29:26.266348  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:29:26.266356  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:29:26.266368  522370 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:29:26.266375  522370 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:29:26.266382  522370 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:29:26.266388  522370 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:29:26.266394  522370 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:29:26.266410  522370 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:29:26.266417  522370 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:29:26.266425  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:29:26.266435  522370 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:29:26.266440  522370 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:29:26.266447  522370 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:29:26.266455  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:29:26.266463  522370 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:29:26.266467  522370 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:29:26.266475  522370 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:29:26.266479  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:29:26.266489  522370 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1206 10:29:26.266501  522370 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:29:26.266510  522370 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:29:26.266520  522370 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:29:26.266531  522370 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:29:26.266542  522370 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:29:26.266552  522370 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:29:26.266559  522370 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:29:26.266571  522370 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:29:26.266585  522370 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:29:26.266593  522370 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:29:26.266603  522370 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:29:26.266610  522370 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:29:26.266617  522370 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:29:26.266625  522370 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:29:26.266636  522370 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:29:26.266641  522370 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:29:26.266647  522370 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:29:26.266656  522370 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:29:26.266660  522370 command_runner.go:130] > #
	I1206 10:29:26.266665  522370 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:29:26.266675  522370 command_runner.go:130] > #
	I1206 10:29:26.266682  522370 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:29:26.266689  522370 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1206 10:29:26.266694  522370 command_runner.go:130] > #
	I1206 10:29:26.266701  522370 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:29:26.266708  522370 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:29:26.266711  522370 command_runner.go:130] > #
	I1206 10:29:26.266718  522370 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:29:26.266723  522370 command_runner.go:130] > # feature.
	I1206 10:29:26.266726  522370 command_runner.go:130] > #
	I1206 10:29:26.266732  522370 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1206 10:29:26.266739  522370 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:29:26.266747  522370 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:29:26.266754  522370 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:29:26.266763  522370 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:29:26.266768  522370 command_runner.go:130] > #
	I1206 10:29:26.266774  522370 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:29:26.266786  522370 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:29:26.266792  522370 command_runner.go:130] > #
	I1206 10:29:26.266800  522370 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1206 10:29:26.266806  522370 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:29:26.266809  522370 command_runner.go:130] > #
	I1206 10:29:26.266815  522370 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:29:26.266825  522370 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:29:26.266831  522370 command_runner.go:130] > # limitation.
	I1206 10:29:26.266835  522370 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:29:26.266848  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:29:26.266853  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266856  522370 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:29:26.266862  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266868  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266873  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266880  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266884  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266889  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266892  522370 command_runner.go:130] > allowed_annotations = [
	I1206 10:29:26.266897  522370 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:29:26.266900  522370 command_runner.go:130] > ]
	I1206 10:29:26.266904  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266911  522370 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:29:26.266916  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:29:26.266921  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266932  522370 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:29:26.266939  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266943  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266947  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266952  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266961  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266966  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266970  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266981  522370 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:29:26.266987  522370 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:29:26.266995  522370 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:29:26.267006  522370 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 10:29:26.267024  522370 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:29:26.267035  522370 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:29:26.267047  522370 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:29:26.267054  522370 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:29:26.267063  522370 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:29:26.267072  522370 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:29:26.267080  522370 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:29:26.267087  522370 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:29:26.267094  522370 command_runner.go:130] > # Example:
	I1206 10:29:26.267098  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:29:26.267103  522370 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:29:26.267108  522370 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:29:26.267132  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:29:26.267141  522370 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:29:26.267145  522370 command_runner.go:130] > # cpushares = "5"
	I1206 10:29:26.267149  522370 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:29:26.267152  522370 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:29:26.267156  522370 command_runner.go:130] > # cpulimit = "35"
	I1206 10:29:26.267159  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.267165  522370 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:29:26.267172  522370 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:29:26.267181  522370 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:29:26.267188  522370 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:29:26.267199  522370 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:29:26.267205  522370 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
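As a usage sketch, a pod opts in at creation time via the activation annotation shown in the example above (the pod and container names here are hypothetical):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/demo: '{"cpushares": "10"}'
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.10.1
	EOF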
	I1206 10:29:26.267210  522370 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:29:26.267224  522370 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:29:26.267229  522370 command_runner.go:130] > # Default value is set to true
	I1206 10:29:26.267234  522370 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:29:26.267244  522370 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:29:26.267251  522370 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:29:26.267255  522370 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:29:26.267260  522370 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:29:26.267265  522370 command_runner.go:130] > # timezone: sets the timezone for a container in CRI-O.
	I1206 10:29:26.267277  522370 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:29:26.267283  522370 command_runner.go:130] > # timezone = ""
	I1206 10:29:26.267290  522370 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:29:26.267293  522370 command_runner.go:130] > #
	I1206 10:29:26.267299  522370 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:29:26.267310  522370 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:29:26.267313  522370 command_runner.go:130] > [crio.image]
	I1206 10:29:26.267319  522370 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:29:26.267324  522370 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:29:26.267332  522370 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:29:26.267339  522370 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267343  522370 command_runner.go:130] > # global_auth_file = ""
	I1206 10:29:26.267351  522370 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:29:26.267359  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267364  522370 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.267378  522370 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:29:26.267385  522370 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267396  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267401  522370 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:29:26.267407  522370 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:29:26.267413  522370 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 10:29:26.267421  522370 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 10:29:26.267427  522370 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:29:26.267434  522370 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:29:26.267440  522370 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:29:26.267447  522370 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:29:26.267455  522370 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:29:26.267461  522370 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:29:26.267471  522370 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:29:26.267480  522370 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:29:26.267484  522370 command_runner.go:130] > # pinned_images = [
	I1206 10:29:26.267488  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267494  522370 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:29:26.267502  522370 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:29:26.267509  522370 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:29:26.267517  522370 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:29:26.267525  522370 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:29:26.267530  522370 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:29:26.267538  522370 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:29:26.267548  522370 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:29:26.267556  522370 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:29:26.267566  522370 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1206 10:29:26.267572  522370 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:29:26.267579  522370 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
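For illustration, the file at the path configured above uses the format documented in containers-policy.json(5); the simplest accept-everything policy is:

	cat <<-'EOF' | sudo tee /etc/crio/policy.json
	{
	  "default": [
	    { "type": "insecureAcceptAnything" }
	  ]
	}
	EOF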
	I1206 10:29:26.267587  522370 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:29:26.267594  522370 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:29:26.267597  522370 command_runner.go:130] > # changing them here.
	I1206 10:29:26.267603  522370 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:29:26.267608  522370 command_runner.go:130] > # insecure_registries = [
	I1206 10:29:26.267613  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267620  522370 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:29:26.267637  522370 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:29:26.267641  522370 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:29:26.267646  522370 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:29:26.267671  522370 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:29:26.267678  522370 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:29:26.267687  522370 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:29:26.267699  522370 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:29:26.267706  522370 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:29:26.267714  522370 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1206 10:29:26.267723  522370 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:29:26.267732  522370 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:29:26.267739  522370 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:29:26.267746  522370 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:29:26.267753  522370 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:29:26.267758  522370 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:29:26.267766  522370 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:29:26.267775  522370 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1206 10:29:26.267781  522370 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:29:26.267788  522370 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:29:26.267795  522370 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:29:26.267798  522370 command_runner.go:130] > # CNI plugins.
	I1206 10:29:26.267802  522370 command_runner.go:130] > [crio.network]
	I1206 10:29:26.267808  522370 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:29:26.267816  522370 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:29:26.267820  522370 command_runner.go:130] > # cni_default_network = ""
	I1206 10:29:26.267826  522370 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:29:26.267836  522370 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:29:26.267842  522370 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:29:26.267845  522370 command_runner.go:130] > # plugin_dirs = [
	I1206 10:29:26.267853  522370 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:29:26.267856  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267861  522370 command_runner.go:130] > # List of included pod metrics.
	I1206 10:29:26.267867  522370 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:29:26.267870  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267879  522370 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:29:26.267885  522370 command_runner.go:130] > [crio.metrics]
	I1206 10:29:26.267890  522370 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:29:26.267897  522370 command_runner.go:130] > # enable_metrics = false
	I1206 10:29:26.267902  522370 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:29:26.267906  522370 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:29:26.267912  522370 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:29:26.267919  522370 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:29:26.267925  522370 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:29:26.267938  522370 command_runner.go:130] > # metrics_collectors = [
	I1206 10:29:26.267943  522370 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:29:26.267947  522370 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:29:26.267951  522370 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:29:26.267954  522370 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:29:26.267958  522370 command_runner.go:130] > # 	"operations_total",
	I1206 10:29:26.267962  522370 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:29:26.267966  522370 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:29:26.267970  522370 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:29:26.267977  522370 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:29:26.267981  522370 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:29:26.267986  522370 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:29:26.267990  522370 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:29:26.267993  522370 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:29:26.267997  522370 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:29:26.268003  522370 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:29:26.268007  522370 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:29:26.268011  522370 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:29:26.268014  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268020  522370 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:29:26.268024  522370 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:29:26.268029  522370 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:29:26.268032  522370 command_runner.go:130] > # metrics_port = 9090
	I1206 10:29:26.268037  522370 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:29:26.268041  522370 command_runner.go:130] > # metrics_socket = ""
	I1206 10:29:26.268046  522370 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:29:26.268052  522370 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:29:26.268061  522370 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:29:26.268070  522370 command_runner.go:130] > # certificate on any modification event.
	I1206 10:29:26.268074  522370 command_runner.go:130] > # metrics_cert = ""
	I1206 10:29:26.268079  522370 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:29:26.268086  522370 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:29:26.268090  522370 command_runner.go:130] > # metrics_key = ""
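A sketch of turning metrics on with the defaults listed above and scraping them locally (the drop-in path is an assumption):

	sudo tee /etc/crio/crio.conf.d/30-metrics.conf <<-'EOF'
	[crio.metrics]
	enable_metrics = true
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | grep '^crio_'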
	I1206 10:29:26.268099  522370 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:29:26.268106  522370 command_runner.go:130] > [crio.tracing]
	I1206 10:29:26.268112  522370 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:29:26.268116  522370 command_runner.go:130] > # enable_tracing = false
	I1206 10:29:26.268121  522370 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 10:29:26.268127  522370 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:29:26.268135  522370 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:29:26.268143  522370 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 10:29:26.268147  522370 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:29:26.268150  522370 command_runner.go:130] > [crio.nri]
	I1206 10:29:26.268155  522370 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:29:26.268158  522370 command_runner.go:130] > # enable_nri = true
	I1206 10:29:26.268162  522370 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:29:26.268166  522370 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:29:26.268170  522370 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:29:26.268174  522370 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:29:26.268181  522370 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:29:26.268187  522370 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:29:26.268195  522370 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:29:26.268252  522370 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:29:26.268260  522370 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:29:26.268265  522370 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:29:26.268270  522370 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:29:26.268274  522370 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:29:26.268287  522370 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:29:26.268294  522370 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:29:26.268307  522370 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:29:26.268312  522370 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:29:26.268322  522370 command_runner.go:130] > # - OCI hook injection
	I1206 10:29:26.268327  522370 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:29:26.268333  522370 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:29:26.268340  522370 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:29:26.268344  522370 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:29:26.268356  522370 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:29:26.268363  522370 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:29:26.268368  522370 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:29:26.268375  522370 command_runner.go:130] > #
	I1206 10:29:26.268380  522370 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:29:26.268384  522370 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:29:26.268397  522370 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:29:26.268403  522370 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:29:26.268408  522370 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:29:26.268416  522370 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:29:26.268421  522370 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:29:26.268425  522370 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:29:26.268431  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268436  522370 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
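Enabling the validator uses the same keys; a sketch that rejects OCI hook injection and leaves everything else at its default:

	sudo tee /etc/crio/crio.conf.d/40-nri.conf <<-'EOF'
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	EOF
	sudo systemctl restart crio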
	I1206 10:29:26.268442  522370 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:29:26.268446  522370 command_runner.go:130] > [crio.stats]
	I1206 10:29:26.268454  522370 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:29:26.268465  522370 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:29:26.268469  522370 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:29:26.268475  522370 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:29:26.268484  522370 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:29:26.268489  522370 command_runner.go:130] > # collection_period = 0
	I1206 10:29:26.268581  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:26.268595  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:26.268620  522370 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:29:26.268646  522370 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:29:26.268768  522370 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:29:26.268849  522370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:29:26.276198  522370 command_runner.go:130] > kubeadm
	I1206 10:29:26.276217  522370 command_runner.go:130] > kubectl
	I1206 10:29:26.276221  522370 command_runner.go:130] > kubelet
	I1206 10:29:26.277128  522370 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:29:26.277245  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:29:26.285085  522370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:29:26.297894  522370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:29:26.310811  522370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
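The rendered file can be sanity-checked before kubeadm consumes it; assuming a kubeadm new enough to provide the validate subcommand (v1.26+):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new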
	I1206 10:29:26.323875  522370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:29:26.327560  522370 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:29:26.327877  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:26.463333  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:27.181623  522370 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:29:27.181646  522370 certs.go:195] generating shared ca certs ...
	I1206 10:29:27.181662  522370 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.181794  522370 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:29:27.181841  522370 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:29:27.181855  522370 certs.go:257] generating profile certs ...
	I1206 10:29:27.181981  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:29:27.182049  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:29:27.182120  522370 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:29:27.182139  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:29:27.182178  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:29:27.182195  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:29:27.182206  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:29:27.182221  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:29:27.182231  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:29:27.182242  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:29:27.182252  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:29:27.182310  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:29:27.182343  522370 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:29:27.182351  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:29:27.182391  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:29:27.182420  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:29:27.182445  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:29:27.182502  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:27.182537  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.182553  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.182567  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.183155  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:29:27.204776  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:29:27.223807  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:29:27.246828  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:29:27.269763  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:29:27.290536  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:29:27.308147  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:29:27.326269  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:29:27.344314  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:29:27.361949  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:29:27.379296  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:29:27.396825  522370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:29:27.409539  522370 ssh_runner.go:195] Run: openssl version
	I1206 10:29:27.415501  522370 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:29:27.415885  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.423483  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:29:27.431381  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435336  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435420  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435491  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.477997  522370 command_runner.go:130] > 51391683
	I1206 10:29:27.478450  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:29:27.485910  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.493199  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:29:27.500533  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504197  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504254  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504314  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.549795  522370 command_runner.go:130] > 3ec20f2e
	I1206 10:29:27.550294  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:29:27.557856  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.565301  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:29:27.572772  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576768  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576853  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576925  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.618106  522370 command_runner.go:130] > b5213941
	I1206 10:29:27.618536  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
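The openssl -hash output above is exactly what names each symlink: OpenSSL resolves CA certificates by <subject-hash>.0 under /etc/ssl/certs. The same link can be created generically:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"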
	I1206 10:29:27.626130  522370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629702  522370 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629728  522370 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:29:27.629736  522370 command_runner.go:130] > Device: 259,1	Inode: 3640487     Links: 1
	I1206 10:29:27.629742  522370 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:27.629749  522370 command_runner.go:130] > Access: 2025-12-06 10:25:18.913466133 +0000
	I1206 10:29:27.629754  522370 command_runner.go:130] > Modify: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629758  522370 command_runner.go:130] > Change: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629764  522370 command_runner.go:130] >  Birth: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629823  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:29:27.670498  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.670941  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:29:27.711871  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.712351  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:29:27.753204  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.753665  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:29:27.795554  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.796089  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:29:27.836809  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.837203  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:29:27.878291  522370 command_runner.go:130] > Certificate will not expire
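Each `-checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero if it would, which is what drives the "Certificate will not expire" lines. The equivalent check with Go's standard library, as a rough sketch (checkEnd is an illustrative name, not a minikube function):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the cert at path expires within d, mirroring
// `openssl x509 -checkend <seconds>`.
func checkEnd(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkEnd("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	fmt.Println(expiring, err) // false, nil corresponds to "Certificate will not expire"
}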
	I1206 10:29:27.878357  522370 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:27.878433  522370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:27.878503  522370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:27.905835  522370 cri.go:89] found id: ""
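A side note on the `found id: ""` line: crictl printed nothing (no kube-system containers in the requested state), but strings.Split in Go returns one empty element for an empty input, so a loop over the split output still logs a single empty ID. A tiny demonstration of that behaviour (not minikube's actual parsing code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	out := "" // crictl produced no output
	for _, id := range strings.Split(strings.TrimSpace(out), "\n") {
		fmt.Printf("found id: %q\n", id) // prints: found id: ""
	}
}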
	I1206 10:29:27.905910  522370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:29:27.912750  522370 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:29:27.912773  522370 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:29:27.912780  522370 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:29:27.913690  522370 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:29:27.913706  522370 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:29:27.913783  522370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:29:27.921335  522370 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:29:27.921755  522370 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.921867  522370 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "functional-123579" cluster setting kubeconfig missing "functional-123579" context setting]
	I1206 10:29:27.922200  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.922608  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.922766  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.923311  522370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:29:27.923332  522370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:29:27.923338  522370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:29:27.923344  522370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:29:27.923348  522370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
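The envvar.go lines above are client-go's environment-variable feature gates reporting their defaults; gates like these can typically be overridden through KUBE_FEATURE_<Name> variables. A hedged sketch of that lookup pattern follows; gateEnabled and its fallback behaviour are illustrative, not client-go's implementation:

package main

import (
	"fmt"
	"os"
	"strconv"
)

// gateEnabled returns the gate's state, preferring a KUBE_FEATURE_<name>
// environment override and falling back to the compiled-in default.
func gateEnabled(name string, def bool) bool {
	v, ok := os.LookupEnv("KUBE_FEATURE_" + name)
	if !ok {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def // malformed value: keep the default
	}
	return b
}

func main() {
	fmt.Println(gateEnabled("WatchListClient", false))
}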
	I1206 10:29:27.923710  522370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:29:27.923805  522370 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:29:27.932172  522370 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:29:27.932206  522370 kubeadm.go:602] duration metric: took 18.493373ms to restartPrimaryControlPlane
	I1206 10:29:27.932216  522370 kubeadm.go:403] duration metric: took 53.86688ms to StartCluster
	I1206 10:29:27.932230  522370 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.932300  522370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.932906  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.933111  522370 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:29:27.933400  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:27.933457  522370 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:29:27.933598  522370 addons.go:70] Setting storage-provisioner=true in profile "functional-123579"
	I1206 10:29:27.933615  522370 addons.go:239] Setting addon storage-provisioner=true in "functional-123579"
	I1206 10:29:27.933640  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.933662  522370 addons.go:70] Setting default-storageclass=true in profile "functional-123579"
	I1206 10:29:27.933709  522370 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-123579"
	I1206 10:29:27.934067  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.934105  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.937180  522370 out.go:179] * Verifying Kubernetes components...
	I1206 10:29:27.943300  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:27.955394  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.955630  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.955941  522370 addons.go:239] Setting addon default-storageclass=true in "functional-123579"
	I1206 10:29:27.955970  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.956408  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.980014  522370 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:29:27.983923  522370 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:27.983954  522370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:29:27.984026  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:27.996144  522370 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:27.996165  522370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:29:27.996228  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:28.024613  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.044906  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.158003  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:28.171055  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:28.191069  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:28.930363  522370 node_ready.go:35] waiting up to 6m0s for node "functional-123579" to be "Ready" ...
	I1206 10:29:28.930490  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930625  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930666  522370 retry.go:31] will retry after 220.153302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930749  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930787  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930813  522370 retry.go:31] will retry after 205.296978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
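From here the log settles into a retry loop: every kubectl apply fails with connection refused (the apiserver on :8441 is still coming up), and retry.go reschedules each manifest with growing, jittered waits (220ms and 205ms here, then 414ms, 542ms, and eventually several seconds). A minimal sketch of that retry-with-backoff shape, with made-up durations and helper names rather than minikube's own policy:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryApply re-runs apply until it succeeds or attempts run out,
// roughly doubling a jittered wait each time, like the waits logged above.
func retryApply(apply func() error, attempts int) error {
	wait := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	err := retryApply(func() error { return fmt.Errorf("connection refused") }, 3)
	fmt.Println(err)
}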
	I1206 10:29:28.930893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:28.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:28.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.136761  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.151269  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.213820  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.217541  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.217581  522370 retry.go:31] will retry after 414.855546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235243  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.235363  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235412  522370 retry.go:31] will retry after 542.074768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.431607  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.431755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.432098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.633557  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.704871  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.715208  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.715276  522370 retry.go:31] will retry after 512.072151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.778572  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.842567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.842631  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.842656  522370 retry.go:31] will retry after 453.896864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.930817  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.930917  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.931386  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.227644  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:30.292361  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.292404  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.292441  522370 retry.go:31] will retry after 965.22043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.297573  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:30.354035  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.357760  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.357796  522370 retry.go:31] will retry after 830.21573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.430970  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.431039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.431358  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:30.931272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
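The interleaved round_trippers lines are the other half of the same wait: node_ready.go polls GET /api/v1/nodes/functional-123579 roughly every 500ms and tolerates connection refused until the 6m0s budget runs out. A stripped-down version of that poll, using a plain HTTP client instead of client-go; the function name, fixed interval, and InsecureSkipVerify shortcut are for illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls url until the apiserver answers or timeout elapses.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // apiserver answers; real code would now check the Ready condition
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
	return fmt.Errorf("apiserver not reachable within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer("https://192.168.49.2:8441/api/v1/nodes/functional-123579", 6*time.Minute))
}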
	I1206 10:29:31.188810  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:31.258540  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:31.280251  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.280382  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.280411  522370 retry.go:31] will retry after 670.25639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331402  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.331517  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331545  522370 retry.go:31] will retry after 1.065706699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.430665  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.430772  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.431166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.930712  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.930893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.931401  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.951563  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:32.028942  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.028998  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.029018  522370 retry.go:31] will retry after 2.122665166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.397466  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:32.431043  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.431193  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.431584  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:32.458856  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.458892  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.458911  522370 retry.go:31] will retry after 1.728877951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.931628  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.931705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.932104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:32.932161  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:33.430893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.431324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:33.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.931279  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.152755  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:34.188350  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:34.249027  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.249069  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.249090  522370 retry.go:31] will retry after 3.684646027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294198  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.294244  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294296  522370 retry.go:31] will retry after 1.427612825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.431504  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.930753  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:35.430737  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:35.431258  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:35.722778  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:35.786215  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:35.786258  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.786277  522370 retry.go:31] will retry after 5.772571648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.931559  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.931640  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.431654  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.431914  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.930676  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.931086  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.430781  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.931882  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:37.931937  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:37.934240  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:38.012005  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:38.012049  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.012071  522370 retry.go:31] will retry after 2.264254307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.430647  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.430724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.431052  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:38.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.930848  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.931203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.931197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:40.276629  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:40.338233  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:40.338274  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.338294  522370 retry.go:31] will retry after 6.465617702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.431489  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.431563  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.431893  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:40.431948  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:40.931681  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.931758  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.932017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.559542  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:41.618815  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:41.618852  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.618871  522370 retry.go:31] will retry after 5.212992024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
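
The retry.go:31 lines show minikube backing off between apply attempts with growing, jittered intervals (6.47s, 5.21s, 4.98s, 5.75s here, later 12-17s and eventually ~35s). A minimal sketch of that pattern, assuming a hypothetical retryWithBackoff helper rather than minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn until it succeeds or attempts run out,
// sleeping a jittered, growing interval between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the base interval and add up to one base of jitter, so the
		// concurrent appliers (storage-provisioner, storageclass) do not
		// retry in lockstep.
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 2*time.Second, func() error {
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}
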
	I1206 10:29:41.931382  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.931461  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.931787  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 polls repeated every ~0.5s from 10:29:42.43 through 10:29:46.43; node_ready.go:55 logged the same "connection refused" warning at 10:29:42.93 and 10:29:45.43 ...]
	I1206 10:29:46.804868  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:46.832399  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:46.865940  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.865975  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.865994  522370 retry.go:31] will retry after 4.982943882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.906612  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906632  522370 retry.go:31] will retry after 5.755281988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:29:46.930817  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:46.931156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... node poll repeated every ~0.5s from 10:29:47.43 through 10:29:51.43; "connection refused" warnings at 10:29:47.93 and 10:29:50.43 ...]
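
The Request/Response pairs above come from minikube polling the node's Ready condition through client-go every ~0.5s. A minimal sketch of an equivalent poll, assuming client-go plus the kubeconfig path and node name taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the cadence in the log, until Ready or timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-123579", metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. connection refused) keep the poll going.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
}
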
	I1206 10:29:51.849751  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:51.909824  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:51.909861  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.909882  522370 retry.go:31] will retry after 17.161477779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.930951  522370 type.go:168] "Request Body" body=""
	I1206 10:29:51.931035  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:51.931342  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:52.431051  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.431146  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.431458  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:52.431512  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:52.663117  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:52.730608  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:52.730656  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.730678  522370 retry.go:31] will retry after 12.860735555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.931180  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.931254  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.931513  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... node poll repeated every ~0.5s from 10:29:53.43 through 10:30:05.43; "connection refused" warnings at 10:29:54.93, 10:29:56.93, 10:29:58.93, 10:30:00.93 and 10:30:03.43 ...]
	I1206 10:30:05.591568  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:05.650107  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:05.653722  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.653756  522370 retry.go:31] will retry after 16.31009922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.931225  522370 type.go:168] "Request Body" body=""
	I1206 10:30:05.931303  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:05.931640  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:05.931697  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node poll repeated every ~0.5s from 10:30:06.43 through 10:30:08.93; "connection refused" warning at 10:30:08.43 ...]
	I1206 10:30:09.072554  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:09.131495  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:09.131531  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.131550  522370 retry.go:31] will retry after 16.873374267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
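
Note that two different endpoints are being refused here: kubectl, running inside the node, dials localhost:8441, while minikube's poller dials the node IP 192.168.49.2:8441. Both refusing connections points at the apiserver process itself being down rather than a routing or firewall problem. A hypothetical probe that makes that distinction explicit (addresses taken from the log; the probe is an editorial sketch, not part of the test):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If only one address were refused, the problem would be reachability;
	// both refused means nothing is listening on 8441 at all.
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
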
	I1206 10:30:09.430840  522370 type.go:168] "Request Body" body=""
	I1206 10:30:09.430908  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:09.431218  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... node poll repeated every ~0.5s from 10:30:09.93 through 10:30:21.93; "connection refused" warnings at 10:30:10.93, 10:30:12.93, 10:30:15.43, 10:30:17.93 and 10:30:20.43 ...]
	I1206 10:30:21.964425  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:22.031284  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:22.031334  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:22.031356  522370 retry.go:31] will retry after 35.791693435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:22.430787  522370 type.go:168] "Request Body" body=""
	I1206 10:30:22.430867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:22.431181  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... node poll repeated every ~0.5s from 10:30:22.93 through 10:30:25.93; "connection refused" warnings at 10:30:22.93 and 10:30:25.43 ...]
	I1206 10:30:26.005763  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:26.074782  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:26.074834  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:26.074855  522370 retry.go:31] will retry after 34.92165894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... the ~500ms poll of https://192.168.49.2:8441/api/v1/nodes/functional-123579 continued unchanged from 10:30:26.431 to 10:30:57.431, with the node_ready.go:55 "dial tcp 192.168.49.2:8441: connect: connection refused" warning recurring roughly every two seconds ...]
	I1206 10:30:57.823985  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:57.887311  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891368  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891481  522370 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
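	Why this apply fails: kubectl runs client-side validation by downloading the apiserver's OpenAPI schema, and with nothing listening on localhost:8441 that download is the first thing to break; the error text itself suggests --validate=false, though while the apiserver refuses connections the apply would still fail at submission. A minimal sketch of re-running the command by hand, assuming "minikube ssh" access to the functional-123579 node (the binary and manifest paths are the ones this log shows):

	# Re-run the failing apply from inside the node; --validate=false skips the
	# OpenAPI schema download, but the apply itself still needs the apiserver
	# on :8441 to be up.
	minikube ssh -p functional-123579 -- \
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/storageclass.yaml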
	[... polling continued unchanged from 10:30:57.930 to 10:31:00.931 ...]
	I1206 10:31:00.997513  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:01.064863  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068488  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068586  522370 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:31:01.073496  522370 out.go:179] * Enabled addons: 
	I1206 10:31:01.076263  522370 addons.go:530] duration metric: took 1m33.142805076s for enable addons: enabled=[]
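	The surrounding Request/Response chatter is minikube's node-readiness poll: node_ready.go issues GET /api/v1/nodes/functional-123579 twice a second and reads the node's Ready condition from the result. A hedged shell equivalent of that check (an illustration, not minikube's own code; the kubeconfig path is the one from this log and is only valid inside the node):

	# Read the node's Ready condition the way the poll does; while
	# 192.168.49.2:8441 refuses connections this fails with the same
	# "connection refused" error seen above.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig \
	  get node functional-123579 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'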
	[... polling continued unchanged from 10:31:01.430 to 10:31:17.931, still warning: error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1206 10:31:18.431537  522370 type.go:168] "Request Body" body=""
	I1206 10:31:18.431605  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.431868  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.930645  522370 type.go:168] "Request Body" body=""
	I1206 10:31:18.930720  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.931093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:31:19.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.431254  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.431307  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:19.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:31:19.930781  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.430790  522370 type.go:168] "Request Body" body=""
	I1206 10:31:20.430893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.931043  522370 type.go:168] "Request Body" body=""
	I1206 10:31:20.931148  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.931503  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.431268  522370 type.go:168] "Request Body" body=""
	I1206 10:31:21.431356  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.431682  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:21.431723  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:21.931490  522370 type.go:168] "Request Body" body=""
	I1206 10:31:21.931570  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.931895  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.431704  522370 type.go:168] "Request Body" body=""
	I1206 10:31:22.431783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.432137  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.930934  522370 type.go:168] "Request Body" body=""
	I1206 10:31:22.931013  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.931330  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:31:23.430800  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.431163  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.930907  522370 type.go:168] "Request Body" body=""
	I1206 10:31:23.931011  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.931347  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:23.931408  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:31:24.430793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.431100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.930781  522370 type.go:168] "Request Body" body=""
	I1206 10:31:24.930881  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.931205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:31:25.430793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:31:25.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.931098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:31:26.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.431285  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.930800  522370 type.go:168] "Request Body" body=""
	I1206 10:31:26.930898  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.931198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.431688  522370 type.go:168] "Request Body" body=""
	I1206 10:31:27.431783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.432074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.931195  522370 type.go:168] "Request Body" body=""
	I1206 10:31:27.931291  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.931692  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.431526  522370 type.go:168] "Request Body" body=""
	I1206 10:31:28.431657  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.432017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:28.432087  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:28.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:31:28.930798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.430715  522370 type.go:168] "Request Body" body=""
	I1206 10:31:29.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.930720  522370 type.go:168] "Request Body" body=""
	I1206 10:31:29.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.430735  522370 type.go:168] "Request Body" body=""
	I1206 10:31:30.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.431203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.930763  522370 type.go:168] "Request Body" body=""
	I1206 10:31:30.930838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.931220  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:30.931276  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:31.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:31:31.430999  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.431356  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.931034  522370 type.go:168] "Request Body" body=""
	I1206 10:31:31.931102  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.931394  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.430894  522370 type.go:168] "Request Body" body=""
	I1206 10:31:32.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.431350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.931206  522370 type.go:168] "Request Body" body=""
	I1206 10:31:32.931296  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.931626  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:32.931683  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:33.431202  522370 type.go:168] "Request Body" body=""
	I1206 10:31:33.431271  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.431607  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.931401  522370 type.go:168] "Request Body" body=""
	I1206 10:31:33.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.931817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.431625  522370 type.go:168] "Request Body" body=""
	I1206 10:31:34.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.432035  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.931669  522370 type.go:168] "Request Body" body=""
	I1206 10:31:34.931742  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.932009  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:34.932053  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:35.430771  522370 type.go:168] "Request Body" body=""
	I1206 10:31:35.430852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.431237  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.930935  522370 type.go:168] "Request Body" body=""
	I1206 10:31:35.931012  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.931347  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:31:36.430797  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.431104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.930741  522370 type.go:168] "Request Body" body=""
	I1206 10:31:36.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.931208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:31:37.430790  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.431167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:37.431222  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:37.931252  522370 type.go:168] "Request Body" body=""
	I1206 10:31:37.931330  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.931655  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.431472  522370 type.go:168] "Request Body" body=""
	I1206 10:31:38.431546  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.431863  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.930659  522370 type.go:168] "Request Body" body=""
	I1206 10:31:38.930734  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.931062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.430764  522370 type.go:168] "Request Body" body=""
	I1206 10:31:39.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.431171  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.930872  522370 type.go:168] "Request Body" body=""
	I1206 10:31:39.931015  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.931393  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:39.931453  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.431186  522370 type.go:168] "Request Body" body=""
	I1206 10:31:40.431263  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.431606  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.931379  522370 type.go:168] "Request Body" body=""
	I1206 10:31:40.931446  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.931701  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.431485  522370 type.go:168] "Request Body" body=""
	I1206 10:31:41.431564  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.431887  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.930643  522370 type.go:168] "Request Body" body=""
	I1206 10:31:41.930718  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.931057  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:31:42.430823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.431171  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.431219  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:42.931185  522370 type.go:168] "Request Body" body=""
	I1206 10:31:42.931265  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.931600  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.431298  522370 type.go:168] "Request Body" body=""
	I1206 10:31:43.431370  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.431690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:31:43.931550  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.931859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.431577  522370 type.go:168] "Request Body" body=""
	I1206 10:31:44.431700  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.432084  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:44.432138  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:44.930770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:44.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.931206  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:31:45.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.431161  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.930853  522370 type.go:168] "Request Body" body=""
	I1206 10:31:45.930932  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.931318  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:31:46.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.431204  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.930747  522370 type.go:168] "Request Body" body=""
	I1206 10:31:46.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.931099  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:46.931170  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:47.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:47.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.931329  522370 type.go:168] "Request Body" body=""
	I1206 10:31:47.931412  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.931751  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.431557  522370 type.go:168] "Request Body" body=""
	I1206 10:31:48.431630  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.431921  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.930683  522370 type.go:168] "Request Body" body=""
	I1206 10:31:48.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.931083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:31:49.430898  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.431254  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:49.431313  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:49.930720  522370 type.go:168] "Request Body" body=""
	I1206 10:31:49.930793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.931110  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:50.430874  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.431234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.931041  522370 type.go:168] "Request Body" body=""
	I1206 10:31:50.931153  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.931493  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.431234  522370 type.go:168] "Request Body" body=""
	I1206 10:31:51.431312  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.431631  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:51.431691  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:51.931489  522370 type.go:168] "Request Body" body=""
	I1206 10:31:51.931580  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.931981  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.430704  522370 type.go:168] "Request Body" body=""
	I1206 10:31:52.430806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.431144  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.930913  522370 type.go:168] "Request Body" body=""
	I1206 10:31:52.930987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.931309  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:31:53.430813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.431186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.930898  522370 type.go:168] "Request Body" body=""
	I1206 10:31:53.930988  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.931350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:53.931408  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.431065  522370 type.go:168] "Request Body" body=""
	I1206 10:31:54.431152  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.431403  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:54.931103  522370 type.go:168] "Request Body" body=""
	I1206 10:31:54.931201  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.931542  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.431350  522370 type.go:168] "Request Body" body=""
	I1206 10:31:55.431428  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.431748  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.931464  522370 type.go:168] "Request Body" body=""
	I1206 10:31:55.931536  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:55.931832  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:56.431629  522370 type.go:168] "Request Body" body=""
	I1206 10:31:56.431704  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:56.930782  522370 type.go:168] "Request Body" body=""
	I1206 10:31:56.930863  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.931219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.430905  522370 type.go:168] "Request Body" body=""
	I1206 10:31:57.430978  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.931572  522370 type.go:168] "Request Body" body=""
	I1206 10:31:57.931656  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.931998  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:57.932052  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:58.430762  522370 type.go:168] "Request Body" body=""
	I1206 10:31:58.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.431216  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:31:58.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.931055  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.430703  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.430788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.930832  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.931193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.431018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.431383  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:00.431435  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:00.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.930823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.931167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.430987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.430793  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.430870  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.431209  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.931198  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.931274  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:02.931666  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:03.431269  522370 type.go:168] "Request Body" body=""
	I1206 10:32:03.431341  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.431598  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:03.931409  522370 type.go:168] "Request Body" body=""
	I1206 10:32:03.931493  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.931843  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:04.431512  522370 type.go:168] "Request Body" body=""
	I1206 10:32:04.431588  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.431937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:04.930649  522370 type.go:168] "Request Body" body=""
	I1206 10:32:04.930727  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.930996  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:05.430715  522370 type.go:168] "Request Body" body=""
	I1206 10:32:05.430789  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:05.431201  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:05.930878  522370 type.go:168] "Request Body" body=""
	I1206 10:32:05.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.931320  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:06.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:32:06.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.431112  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:06.930766  522370 type.go:168] "Request Body" body=""
	I1206 10:32:06.930839  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:07.430760  522370 type.go:168] "Request Body" body=""
	I1206 10:32:07.430842  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.431197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:07.431255  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:07.931421  522370 type.go:168] "Request Body" body=""
	I1206 10:32:07.931493  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.931819  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 poll repeats unchanged at ~500ms intervals from 10:32:08.431 through 10:33:08.932; every attempt returns no response (status="" milliseconds=0), and node_ready.go:55 logs the identical "connection refused" retry warning roughly every two seconds ...]
	I1206 10:33:09.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:33:09.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:33:09.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.430945  522370 type.go:168] "Request Body" body=""
	I1206 10:33:10.431022  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.431384  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:10.431441  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
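
Two constants repeat on every request above and are worth decoding. The Accept header advertises protobuf first with a JSON fallback (application/vnd.kubernetes.protobuf,application/json), which is standard Kubernetes client content negotiation; and the trailing kubernetes/$Format in the User-Agent suggests the binary's git-commit string was never stamped at build time, so the version placeholder leaked through verbatim (likewise the v0.0.0 version). A minimal client-go sketch of where those values come from, assuming the standard k8s.io/client-go packages (the kubeconfig path is hypothetical):

	package main

	import (
		"fmt"

		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		// These two ContentConfig fields become the HTTP content-negotiation headers:
		// prefer protobuf on the wire, fall back to JSON for types that lack protobuf.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		// DefaultKubernetesUserAgent renders "<binary>/<version> (<os>/<arch>) kubernetes/<commit>",
		// the same shape as the User-Agent lines in the log.
		fmt.Println(rest.DefaultKubernetesUserAgent())
	}
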
	I1206 10:33:10.931100  522370 type.go:168] "Request Body" body=""
	I1206 10:33:10.931186  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.931443  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical 500ms request/response cycles omitted through 10:34:02; node_ready.go:55 "will retry" connection-refused warnings recur at 10:33:12, 10:33:14, 10:33:17, 10:33:19, 10:33:22, 10:33:24, 10:33:26, 10:33:28, 10:33:31, 10:33:33, 10:33:35, 10:33:37, 10:33:40, 10:33:42, 10:33:44, 10:33:46, 10:33:48, 10:33:51, 10:33:53, 10:33:55, 10:33:57, 10:34:00 and 10:34:02 ...]
	I1206 10:34:03.430966  522370 type.go:168] "Request Body" body=""
	I1206 10:34:03.431062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.431375  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:03.930731  522370 type.go:168] "Request Body" body=""
	I1206 10:34:03.930814  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.430751  522370 type.go:168] "Request Body" body=""
	I1206 10:34:04.430825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.930717  522370 type.go:168] "Request Body" body=""
	I1206 10:34:04.930787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.931097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:05.430797  522370 type.go:168] "Request Body" body=""
	I1206 10:34:05.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.431234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:05.431295  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:05.930980  522370 type.go:168] "Request Body" body=""
	I1206 10:34:05.931058  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.931414  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:34:06.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.431089  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:34:06.930844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.931244  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.430820  522370 type.go:168] "Request Body" body=""
	I1206 10:34:07.430894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.431251  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.931445  522370 type.go:168] "Request Body" body=""
	I1206 10:34:07.931516  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.931771  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:07.931812  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:08.431524  522370 type.go:168] "Request Body" body=""
	I1206 10:34:08.431601  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.431921  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:08.930678  522370 type.go:168] "Request Body" body=""
	I1206 10:34:08.930767  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.430817  522370 type.go:168] "Request Body" body=""
	I1206 10:34:09.430892  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.930925  522370 type.go:168] "Request Body" body=""
	I1206 10:34:09.931018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.931371  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:10.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:34:10.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.431202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:10.431255  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:10.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:34:10.930831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.931090  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:34:11.430882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:34:11.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.430751  522370 type.go:168] "Request Body" body=""
	I1206 10:34:12.430822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.431076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.930963  522370 type.go:168] "Request Body" body=""
	I1206 10:34:12.931034  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.931391  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:12.931447  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:13.430984  522370 type.go:168] "Request Body" body=""
	I1206 10:34:13.431059  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.431405  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.930730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:13.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.931082  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.430699  522370 type.go:168] "Request Body" body=""
	I1206 10:34:14.430785  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.930773  522370 type.go:168] "Request Body" body=""
	I1206 10:34:14.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.931210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:15.430808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.431058  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:15.431101  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:34:15.930809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.931163  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.430877  522370 type.go:168] "Request Body" body=""
	I1206 10:34:16.430949  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.431309  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:34:16.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.931088  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.430798  522370 type.go:168] "Request Body" body=""
	I1206 10:34:17.430879  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.431288  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.931511  522370 type.go:168] "Request Body" body=""
	I1206 10:34:17.931612  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.931976  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.431590  522370 type.go:168] "Request Body" body=""
	I1206 10:34:18.431659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.432004  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:34:18.930808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:19.430863  522370 type.go:168] "Request Body" body=""
	I1206 10:34:19.430939  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.431293  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:19.431346  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:19.930992  522370 type.go:168] "Request Body" body=""
	I1206 10:34:19.931064  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.931410  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:34:20.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.931558  522370 type.go:168] "Request Body" body=""
	I1206 10:34:20.931639  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.430701  522370 type.go:168] "Request Body" body=""
	I1206 10:34:21.430786  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:34:21.930827  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.931172  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:21.931232  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:22.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:34:22.430999  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.431346  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:22.931270  522370 type.go:168] "Request Body" body=""
	I1206 10:34:22.931368  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.931817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.431585  522370 type.go:168] "Request Body" body=""
	I1206 10:34:23.431659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.431973  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:34:23.930759  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.931087  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:24.430797  522370 type.go:168] "Request Body" body=""
	I1206 10:34:24.430872  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.431117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:24.431176  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:24.930806  522370 type.go:168] "Request Body" body=""
	I1206 10:34:24.930882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.430788  522370 type.go:168] "Request Body" body=""
	I1206 10:34:25.430861  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.930868  522370 type.go:168] "Request Body" body=""
	I1206 10:34:25.930939  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.931218  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:26.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:34:26.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.431213  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:26.431274  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:26.930768  522370 type.go:168] "Request Body" body=""
	I1206 10:34:26.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.931192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.430884  522370 type.go:168] "Request Body" body=""
	I1206 10:34:27.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.431252  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.931325  522370 type.go:168] "Request Body" body=""
	I1206 10:34:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.931744  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.431435  522370 type.go:168] "Request Body" body=""
	I1206 10:34:28.431523  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.431850  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:28.431909  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:28.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:34:28.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.931970  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:34:29.430803  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.431141  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.930784  522370 type.go:168] "Request Body" body=""
	I1206 10:34:29.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.430844  522370 type.go:168] "Request Body" body=""
	I1206 10:34:30.430919  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.431210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.930768  522370 type.go:168] "Request Body" body=""
	I1206 10:34:30.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.931235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:30.931295  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:31.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:34:31.430887  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.431198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:31.930745  522370 type.go:168] "Request Body" body=""
	I1206 10:34:31.930813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.931077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:34:32.430840  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.431195  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.931075  522370 type.go:168] "Request Body" body=""
	I1206 10:34:32.931167  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.931468  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:32.931518  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:33.431100  522370 type.go:168] "Request Body" body=""
	I1206 10:34:33.431184  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.431485  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:33.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:33.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.931221  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:34:34.430877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.431210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.930739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:34.930818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.931162  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.430773  522370 type.go:168] "Request Body" body=""
	I1206 10:34:35.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.431214  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:35.431268  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:35.930868  522370 type.go:168] "Request Body" body=""
	I1206 10:34:35.930944  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.430720  522370 type.go:168] "Request Body" body=""
	I1206 10:34:36.430791  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.431040  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.930739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:36.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.931195  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.430910  522370 type.go:168] "Request Body" body=""
	I1206 10:34:37.430986  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.431301  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:37.431348  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:37.931302  522370 type.go:168] "Request Body" body=""
	I1206 10:34:37.931371  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.931629  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.431530  522370 type.go:168] "Request Body" body=""
	I1206 10:34:38.431619  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.431930  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.930656  522370 type.go:168] "Request Body" body=""
	I1206 10:34:38.930736  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.931104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.430791  522370 type.go:168] "Request Body" body=""
	I1206 10:34:39.430869  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.431157  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.930904  522370 type.go:168] "Request Body" body=""
	I1206 10:34:39.930984  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.931350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:39.931412  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:40.431091  522370 type.go:168] "Request Body" body=""
	I1206 10:34:40.431191  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.431534  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:40.931277  522370 type.go:168] "Request Body" body=""
	I1206 10:34:40.931349  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.931605  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.431406  522370 type.go:168] "Request Body" body=""
	I1206 10:34:41.431517  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.431838  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.931609  522370 type.go:168] "Request Body" body=""
	I1206 10:34:41.931696  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.932047  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:41.932102  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:42.430748  522370 type.go:168] "Request Body" body=""
	I1206 10:34:42.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.431103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.931215  522370 type.go:168] "Request Body" body=""
	I1206 10:34:42.931317  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.931648  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.431450  522370 type.go:168] "Request Body" body=""
	I1206 10:34:43.431526  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.431858  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.931579  522370 type.go:168] "Request Body" body=""
	I1206 10:34:43.931659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.931991  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.431656  522370 type.go:168] "Request Body" body=""
	I1206 10:34:44.431730  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.432129  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:44.432185  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:44.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:34:44.930810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.430889  522370 type.go:168] "Request Body" body=""
	I1206 10:34:45.430961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.930943  522370 type.go:168] "Request Body" body=""
	I1206 10:34:45.931026  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.931431  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:34:46.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.431156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.930832  522370 type.go:168] "Request Body" body=""
	I1206 10:34:46.930896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:46.931219  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:47.430865  522370 type.go:168] "Request Body" body=""
	I1206 10:34:47.430941  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.431318  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.931392  522370 type.go:168] "Request Body" body=""
	I1206 10:34:47.931469  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.931802  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.431602  522370 type.go:168] "Request Body" body=""
	I1206 10:34:48.431696  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.432026  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:48.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.931294  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:48.931353  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:49.431025  522370 type.go:168] "Request Body" body=""
	I1206 10:34:49.431108  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.431448  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.930724  522370 type.go:168] "Request Body" body=""
	I1206 10:34:49.930802  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.430791  522370 type.go:168] "Request Body" body=""
	I1206 10:34:50.430867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.431248  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.930784  522370 type.go:168] "Request Body" body=""
	I1206 10:34:50.930864  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.931205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:34:51.430811  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.431080  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:51.431150  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:51.930837  522370 type.go:168] "Request Body" body=""
	I1206 10:34:51.930930  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:52.430851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.431202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.931267  522370 type.go:168] "Request Body" body=""
	I1206 10:34:52.931348  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.931664  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.431501  522370 type.go:168] "Request Body" body=""
	I1206 10:34:53.431595  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.431957  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:53.432013  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:53.931658  522370 type.go:168] "Request Body" body=""
	I1206 10:34:53.931738  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.932077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.430749  522370 type.go:168] "Request Body" body=""
	I1206 10:34:54.430871  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.431247  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:34:54.930837  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.931206  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.430922  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.431013  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.431352  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.930722  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.931160  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:55.931217  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:56.430917  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.430995  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.431296  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.931062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.931423  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.431029  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.431303  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.931550  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.931631  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.932029  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:58.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.431155  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.930843  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.930914  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.430950  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.431266  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.930906  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.430976  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.431061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.431541  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:00.431605  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:00.931369  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.431561  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.930651  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.930724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.930990  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.430729  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.430828  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.931011  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.931089  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.931442  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:02.931498  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:03.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.430888  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.431297  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.930812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:05.431281  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:05.930825  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.930901  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.931256  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.431148  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.931217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.430991  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.431345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:07.431402  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:07.931619  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.931687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.931937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.430638  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.430708  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.431028  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.431338  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:09.931252  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.430779  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.430901  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.430718  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.431211  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.931636  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.431462  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.431538  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.431885  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.931641  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.431257  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:14.930975  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.931053  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.931466  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.431217  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.431580  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.931377  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.931454  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.931796  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.431484  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.431559  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.431888  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:16.431945  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:16.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.931713  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.931977  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.931466  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.931549  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.931886  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.431642  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.431964  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:18.432006  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:18.930687  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.930760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.931117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.430852  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.430938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.431325  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.930751  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.930852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.931255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:20.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:21.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.930732  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.931186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.430734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.430810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.931191  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.931524  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:22.931567  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:23.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.431424  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.431074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.430898  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.431343  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:25.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:25.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.931103  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.931404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.931215  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.430765  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.931322  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.931759  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:27.931820  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:28.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.431179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.930795  522370 node_ready.go:38] duration metric: took 6m0.000265171s for node "functional-123579" to be "Ready" ...
	I1206 10:35:28.934235  522370 out.go:203] 
	W1206 10:35:28.937230  522370 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:35:28.937255  522370 out.go:285] * 
	W1206 10:35:28.939411  522370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:35:28.942269  522370 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948799439Z" level=info msg="Using the internal default seccomp profile"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948873136Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948927305Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948983969Z" level=info msg="RDT not available in the host system"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.949060069Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.95001273Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.950102393Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.950167188Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.950967097Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.951048383Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.9513514Z" level=info msg="Updated default CNI network name to "
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.9523931Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/
hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_me
mory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir
= \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cri
o.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.952804193Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.952869734Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988235454Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988272007Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988323395Z" level=info msg="Create NRI interface"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988426186Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988433989Z" level=info msg="runtime interface created"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988446223Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988452434Z" level=info msg="runtime interface starting up..."
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988458941Z" level=info msg="starting plugins..."
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988472553Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988537683Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:29:25 functional-123579 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:35:31.041585    8616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:31.042982    8616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:31.043991    8616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:31.045593    8616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:31.045941    8616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:35:31 up  3:18,  0 user,  load average: 0.12, 0.27, 0.82
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:35:28 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:28 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 06 10:35:28 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:28 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:28 functional-123579 kubelet[8505]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:28 functional-123579 kubelet[8505]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:29 functional-123579 kubelet[8505]: E1206 10:35:29.015665    8505 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:29 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:29 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:29 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 06 10:35:29 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:29 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:29 functional-123579 kubelet[8511]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:29 functional-123579 kubelet[8511]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:29 functional-123579 kubelet[8511]: E1206 10:35:29.751091    8511 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:29 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:29 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:30 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 06 10:35:30 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:30 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:30 functional-123579 kubelet[8531]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:30 functional-123579 kubelet[8531]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:30 functional-123579 kubelet[8531]: E1206 10:35:30.499205    8531 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:30 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:30 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
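
The kubelet section of the log above is the root cause of the dead apiserver: every systemd restart (counters 1137 through 1139, roughly twice per second) dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1". This kubelet build requires the unified cgroup v2 hierarchy, while the runner's kernel still mounts cgroup v1. A minimal, Linux-only Go sketch of the same host check, for illustration only (this is not minikube's or the kubelet's actual code): on a unified host, /sys/fs/cgroup is mounted as cgroup2fs, whose statfs magic is CGROUP2_SUPER_MAGIC.

// cgroupcheck: illustrative, Linux-only sketch of the host validation that
// the kubelet log above keeps failing. On a cgroup v2 (unified) host,
// /sys/fs/cgroup is mounted as cgroup2fs.
package main

import (
	"fmt"
	"syscall"
)

// Linux CGROUP2_SUPER_MAGIC: the kernel's filesystem magic for cgroup2fs.
const cgroup2SuperMagic = 0x63677270

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs /sys/fs/cgroup failed:", err)
		return
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified): this kubelet would start")
	} else {
		fmt.Println("cgroup v1: this kubelet refuses to start, as in the log above")
	}
}

On this 5.15.0-1084-aws runner the sketch would take the cgroup v1 branch, which matches the endless restart loop in the kubelet journal.
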
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (377.55738ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
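
The status check above renders through a Go text/template: --format={{.APIServer}} selects a single field of the status result, which is why the raw stdout is the one word "Stopped". A small sketch of that mechanism, with a hypothetical Status struct standing in for minikube's real type:

// statusformat: sketch of how a --format={{.APIServer}} style flag is
// typically implemented: a text/template executed against a status struct.
// The Status type and field values here are illustrative, not minikube code.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// Prints "Stopped", mirroring the stdout captured above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}
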
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.32s)
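
Most of the 369s charged to SoftStart is the node_ready wait visible earlier in the log: one GET of /api/v1/nodes/functional-123579 roughly every 500ms, each refused, until the 6m0s budget expires with "context deadline exceeded". A stdlib-only sketch of that retry shape, using the endpoint and timings from the log (waitTCPReady is a hypothetical helper, not minikube's node_ready implementation):

// waitready: illustrative stdlib-only sketch of the retry loop visible in
// the log above: probe the apiserver endpoint every 500ms until it accepts
// a TCP connection or an overall deadline expires.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitTCPReady dials addr every interval until success or ctx expires.
// Hypothetical helper for illustration, not minikube's implementation.
func waitTCPReady(ctx context.Context, addr string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for %s: %w (last dial error: %v)", addr, ctx.Err(), err)
		case <-ticker.C:
		}
	}
}

func main() {
	// 6m0s matches the wait budget in the GUEST_START error above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitTCPReady(ctx, "192.168.49.2:8441", 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
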

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-123579 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-123579 get po -A: exit status 1 (60.923301ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-123579 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-123579 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-123579 get po -A"
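
"The connection to the server 192.168.49.2:8441 was refused" is ECONNREFUSED from the TCP dial: the node is reachable but nothing is listening on the apiserver port, which matches the Stopped apiserver above rather than a proxy or routing problem (the PROXY env snapshot below is empty). A short sketch that separates that case from timeouts, using only the address taken from this report:

// refusedcheck: illustrative sketch distinguishing "nothing listening"
// (ECONNREFUSED, as in this report) from timeouts or name-resolution
// failures when an apiserver endpoint probe fails.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	_, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	switch {
	case err == nil:
		fmt.Println("apiserver port is accepting connections")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("connection refused: host is up, apiserver not listening")
	default:
		fmt.Println("other failure (timeout, routing, DNS):", err)
	}
}
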
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
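The inspect output above is also where the harness gets its connection details: NetworkSettings.Ports records the loopback port Docker bound for each exposed container port (22/tcp -> 33183, 8441/tcp -> 33186, and so on). As a minimal sketch, the same lookup that appears later in these logs can be rerun by hand with a Go template; the container name functional-123579 and the expected value are taken from the output above:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-123579
    # prints 33183 for this run; the inner index selects the binding list for the
    # literal map key "22/tcp", the outer index takes its first entry

Note the key must stay double-quoted inside the single-quoted template, since "22/tcp" is a map key rather than a struct field name.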
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (304.544139ms)
-- stdout --
	Running
-- /stdout --
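For context, status --format applies a Go template to minikube's status struct, so {{.Host}} prints only the host field, while the exit code separately encodes component health (per the status command's help text, bits are summed: 1 for minikube, 2 for cluster, 4 for Kubernetes). An exit status of 2 alongside "Running" on stdout is therefore self-consistent: the host container is up but the cluster check failed, which is why the harness tolerates the non-zero exit below. To reproduce by hand with the same invocation as above:

    out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
    # stdout: Running
    echo $?
    # 2 in this run: host OK, cluster component not OK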
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 logs -n 25: (1.052268038s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-137526 ssh findmnt -T /mount1                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount1 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount2 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ mount          │ -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount3 --alsologtostderr -v=1                                │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ start          │ -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount1                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ start          │ -p functional-137526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                   │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ start          │ -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh            │ functional-137526 ssh findmnt -T /mount2                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ dashboard      │ --url --port 36195 -p functional-137526 --alsologtostderr -v=1                                                                                    │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh            │ functional-137526 ssh findmnt -T /mount3                                                                                                          │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ mount          │ -p functional-137526 --kill=true                                                                                                                  │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ update-context │ functional-137526 update-context --alsologtostderr -v=2                                                                                           │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format short --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh            │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image          │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image          │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete         │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start          │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-123579 --alsologtostderr -v=8                                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:29:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:29:22.870980  522370 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:22.871170  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871181  522370 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:22.871187  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871464  522370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:29:22.871865  522370 out.go:368] Setting JSON to false
	I1206 10:29:22.872761  522370 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11514,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:29:22.872829  522370 start.go:143] virtualization:  
	I1206 10:29:22.876360  522370 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:29:22.880135  522370 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:29:22.880243  522370 notify.go:221] Checking for updates...
	I1206 10:29:22.885979  522370 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:29:22.888900  522370 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:22.891673  522370 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:29:22.894419  522370 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:29:22.897199  522370 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:29:22.900505  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:22.900663  522370 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:29:22.930035  522370 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:29:22.930154  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:22.994169  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:22.985097483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:22.994270  522370 docker.go:319] overlay module found
	I1206 10:29:22.997336  522370 out.go:179] * Using the docker driver based on existing profile
	I1206 10:29:23.000134  522370 start.go:309] selected driver: docker
	I1206 10:29:23.000177  522370 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.000290  522370 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:29:23.000407  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:23.064912  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:23.055716934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:23.065339  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:23.065406  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:23.065455  522370 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.068684  522370 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:29:23.071544  522370 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:29:23.074549  522370 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:29:23.077588  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:23.077638  522370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:29:23.077648  522370 cache.go:65] Caching tarball of preloaded images
	I1206 10:29:23.077715  522370 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:29:23.077742  522370 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:29:23.077753  522370 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:29:23.077861  522370 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:29:23.100973  522370 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:29:23.100996  522370 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:29:23.101011  522370 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:29:23.101047  522370 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:29:23.101106  522370 start.go:364] duration metric: took 36.569µs to acquireMachinesLock for "functional-123579"
	I1206 10:29:23.101131  522370 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:29:23.101140  522370 fix.go:54] fixHost starting: 
	I1206 10:29:23.101403  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:23.120661  522370 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:29:23.120697  522370 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:29:23.124123  522370 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:29:23.124169  522370 machine.go:94] provisionDockerMachine start ...
	I1206 10:29:23.124278  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.148209  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.148655  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.148670  522370 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:29:23.311217  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.311246  522370 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:29:23.311337  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.330615  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.330948  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.330967  522370 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:29:23.492326  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.492442  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.511425  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.511745  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.511767  522370 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:29:23.663802  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:29:23.663828  522370 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:29:23.663852  522370 ubuntu.go:190] setting up certificates
	I1206 10:29:23.663862  522370 provision.go:84] configureAuth start
	I1206 10:29:23.663938  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:23.683626  522370 provision.go:143] copyHostCerts
	I1206 10:29:23.683677  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683720  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:29:23.683732  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683811  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:29:23.683905  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683927  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:29:23.683935  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683965  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:29:23.684012  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684032  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:29:23.684040  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684065  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:29:23.684117  522370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:29:23.851072  522370 provision.go:177] copyRemoteCerts
	I1206 10:29:23.851167  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:29:23.851208  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.869258  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:23.976487  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:29:23.976551  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:29:23.994935  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:29:23.995001  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:29:24.028988  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:29:24.029065  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:29:24.047435  522370 provision.go:87] duration metric: took 383.548866ms to configureAuth
	I1206 10:29:24.047460  522370 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:29:24.047651  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:24.047753  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.065906  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:24.066279  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:24.066304  522370 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:29:24.394899  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:29:24.394922  522370 machine.go:97] duration metric: took 1.270744832s to provisionDockerMachine
	I1206 10:29:24.394933  522370 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:29:24.394946  522370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:29:24.395040  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:29:24.395089  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.413037  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.518950  522370 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:29:24.522167  522370 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:29:24.522190  522370 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:29:24.522196  522370 command_runner.go:130] > VERSION_ID="12"
	I1206 10:29:24.522201  522370 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:29:24.522206  522370 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:29:24.522219  522370 command_runner.go:130] > ID=debian
	I1206 10:29:24.522224  522370 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:29:24.522228  522370 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:29:24.522234  522370 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:29:24.522273  522370 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:29:24.522296  522370 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:29:24.522307  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:29:24.522366  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:29:24.522448  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:29:24.522465  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 10:29:24.522539  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:29:24.522547  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> /etc/test/nested/copy/488068/hosts
	I1206 10:29:24.522590  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:29:24.529941  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:24.547406  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:29:24.564885  522370 start.go:296] duration metric: took 169.937214ms for postStartSetup
	I1206 10:29:24.565009  522370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:29:24.565071  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.582051  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.684564  522370 command_runner.go:130] > 18%
	I1206 10:29:24.685308  522370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:29:24.690194  522370 command_runner.go:130] > 161G
	I1206 10:29:24.690863  522370 fix.go:56] duration metric: took 1.589719046s for fixHost
	I1206 10:29:24.690882  522370 start.go:83] releasing machines lock for "functional-123579", held for 1.589762361s
	I1206 10:29:24.690959  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:24.710139  522370 ssh_runner.go:195] Run: cat /version.json
	I1206 10:29:24.710198  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.710437  522370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:29:24.710491  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.744752  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.750995  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.850618  522370 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:29:24.850833  522370 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:24.941044  522370 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:29:24.943691  522370 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:29:24.943731  522370 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:29:24.943796  522370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:29:24.982406  522370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:29:24.986710  522370 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:29:24.986856  522370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:29:24.986921  522370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:29:24.995206  522370 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:29:24.995230  522370 start.go:496] detecting cgroup driver to use...
	I1206 10:29:24.995260  522370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:29:24.995314  522370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:29:25.015488  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:29:25.029388  522370 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:29:25.029474  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:29:25.044588  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:29:25.057886  522370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:29:25.175907  522370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:29:25.297406  522370 docker.go:234] disabling docker service ...
	I1206 10:29:25.297502  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:29:25.313940  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:29:25.326948  522370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:29:25.448237  522370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:29:25.592886  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:29:25.605716  522370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:29:25.618765  522370 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:29:25.620045  522370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:29:25.620120  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.628683  522370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:29:25.628808  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.637855  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.646676  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.656251  522370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:29:25.664395  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.673385  522370 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.681859  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.691317  522370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:29:25.697883  522370 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:29:25.698954  522370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:29:25.706470  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:25.835287  522370 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:29:25.994073  522370 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:29:25.994183  522370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:29:25.998083  522370 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:29:25.998204  522370 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:29:25.998238  522370 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:29:25.998335  522370 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:25.998358  522370 command_runner.go:130] > Access: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998390  522370 command_runner.go:130] > Modify: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998420  522370 command_runner.go:130] > Change: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998437  522370 command_runner.go:130] >  Birth: -
	I1206 10:29:25.998473  522370 start.go:564] Will wait 60s for crictl version
	I1206 10:29:25.998553  522370 ssh_runner.go:195] Run: which crictl
	I1206 10:29:26.004847  522370 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:29:26.004981  522370 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:29:26.037391  522370 command_runner.go:130] > Version:  0.1.0
	I1206 10:29:26.037414  522370 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:29:26.037421  522370 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:29:26.037427  522370 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:29:26.037438  522370 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:29:26.037548  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.065733  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.065769  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.065793  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.065805  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.065811  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.065822  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.065827  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.065832  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.065840  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.065845  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.065852  522370 command_runner.go:130] >      static
	I1206 10:29:26.065886  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.065897  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.065918  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.065928  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.065932  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.065941  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.065946  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.065954  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.065958  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.068082  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.095375  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.095453  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.095474  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.095491  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.095522  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.095561  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.095582  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.095622  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.095651  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.095669  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.095698  522370 command_runner.go:130] >      static
	I1206 10:29:26.095717  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.095735  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.095756  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.095787  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.095810  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.095867  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.095888  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.095910  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.095930  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.103062  522370 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:29:26.105990  522370 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:29:26.122102  522370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:29:26.125939  522370 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:29:26.126304  522370 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:29:26.126416  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:26.126475  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.161627  522370 command_runner.go:130] > {
	I1206 10:29:26.161646  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.161650  522370 command_runner.go:130] >     {
	I1206 10:29:26.161662  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.161666  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161672  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.161676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161681  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161689  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.161697  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.161702  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161707  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.161711  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161719  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161729  522370 command_runner.go:130] >     },
	I1206 10:29:26.161732  522370 command_runner.go:130] >     {
	I1206 10:29:26.161739  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.161743  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161748  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.161751  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161757  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161765  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.161774  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.161777  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161781  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.161785  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161792  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161795  522370 command_runner.go:130] >     },
	I1206 10:29:26.161799  522370 command_runner.go:130] >     {
	I1206 10:29:26.161805  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.161810  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161815  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.161818  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161822  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161830  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.161838  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.161843  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161847  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.161851  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.161856  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161859  522370 command_runner.go:130] >     },
	I1206 10:29:26.161863  522370 command_runner.go:130] >     {
	I1206 10:29:26.161869  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.161873  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161878  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.161883  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161887  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161898  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.161905  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.161908  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161912  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.161916  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.161920  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.161923  522370 command_runner.go:130] >       },
	I1206 10:29:26.161935  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161939  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161942  522370 command_runner.go:130] >     },
	I1206 10:29:26.161946  522370 command_runner.go:130] >     {
	I1206 10:29:26.161953  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.161956  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161963  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.161966  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161970  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161978  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.161986  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.161990  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161994  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.161997  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162001  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162004  522370 command_runner.go:130] >       },
	I1206 10:29:26.162008  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162011  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162014  522370 command_runner.go:130] >     },
	I1206 10:29:26.162018  522370 command_runner.go:130] >     {
	I1206 10:29:26.162024  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.162028  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162033  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.162037  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162041  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162050  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.162067  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.162071  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162075  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.162081  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162091  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162094  522370 command_runner.go:130] >       },
	I1206 10:29:26.162098  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162102  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162105  522370 command_runner.go:130] >     },
	I1206 10:29:26.162115  522370 command_runner.go:130] >     {
	I1206 10:29:26.162123  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.162128  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162134  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.162137  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162143  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162154  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.162163  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.162166  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162170  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.162173  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162178  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162181  522370 command_runner.go:130] >     },
	I1206 10:29:26.162184  522370 command_runner.go:130] >     {
	I1206 10:29:26.162191  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.162194  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162200  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.162203  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162207  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162215  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.162232  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.162235  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162239  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.162243  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162250  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162253  522370 command_runner.go:130] >       },
	I1206 10:29:26.162257  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162260  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162263  522370 command_runner.go:130] >     },
	I1206 10:29:26.162267  522370 command_runner.go:130] >     {
	I1206 10:29:26.162273  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.162277  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162281  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.162284  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162288  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162296  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.162304  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.162307  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162311  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.162315  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162318  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.162321  522370 command_runner.go:130] >       },
	I1206 10:29:26.162325  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162329  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.162333  522370 command_runner.go:130] >     }
	I1206 10:29:26.162336  522370 command_runner.go:130] >   ]
	I1206 10:29:26.162339  522370 command_runner.go:130] > }
	I1206 10:29:26.164653  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.164677  522370 crio.go:433] Images already preloaded, skipping extraction
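	The preload verdict above comes down to decoding the crictl JSON and confirming each required tag is present. A sketch of that check using only the fields visible in the dump; the required-image list below is an illustrative subset, not minikube's actual manifest:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the fields visible in the
	// `crictl images --output json` dump above.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		data, err := os.ReadFile("images.json") // e.g. saved crictl output
		if err != nil {
			fmt.Println(err)
			return
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				have[t] = true
			}
		}
		// Illustrative subset of the tags this log expects for v1.35.0-beta.0.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/pause:3.10.1",
		} {
			if !have[want] {
				fmt.Println("missing:", want)
			}
		}
	}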
	I1206 10:29:26.164733  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.190066  522370 command_runner.go:130] > {
	I1206 10:29:26.190096  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.190102  522370 command_runner.go:130] >     {
	I1206 10:29:26.190111  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.190116  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190122  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.190126  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190130  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190139  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.190147  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.190155  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190160  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.190164  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190168  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190171  522370 command_runner.go:130] >     },
	I1206 10:29:26.190174  522370 command_runner.go:130] >     {
	I1206 10:29:26.190181  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.190184  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190189  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.190193  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190197  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190205  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.190213  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.190216  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190220  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.190224  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190229  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190232  522370 command_runner.go:130] >     },
	I1206 10:29:26.190235  522370 command_runner.go:130] >     {
	I1206 10:29:26.190241  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.190245  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190250  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.190254  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190257  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190265  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.190273  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.190277  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190281  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.190285  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.190289  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190292  522370 command_runner.go:130] >     },
	I1206 10:29:26.190295  522370 command_runner.go:130] >     {
	I1206 10:29:26.190301  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.190308  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190313  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.190317  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190322  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190329  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.190336  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.190339  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190343  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.190346  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190350  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190353  522370 command_runner.go:130] >       },
	I1206 10:29:26.190364  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190369  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190372  522370 command_runner.go:130] >     },
	I1206 10:29:26.190374  522370 command_runner.go:130] >     {
	I1206 10:29:26.190381  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.190384  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190389  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.190392  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190396  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190403  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.190412  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.190415  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190419  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.190422  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190425  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190428  522370 command_runner.go:130] >       },
	I1206 10:29:26.190432  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190436  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190439  522370 command_runner.go:130] >     },
	I1206 10:29:26.190441  522370 command_runner.go:130] >     {
	I1206 10:29:26.190448  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.190452  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190460  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.190464  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190467  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190476  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.190484  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.190486  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190490  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.190493  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190497  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190500  522370 command_runner.go:130] >       },
	I1206 10:29:26.190504  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190507  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190514  522370 command_runner.go:130] >     },
	I1206 10:29:26.190517  522370 command_runner.go:130] >     {
	I1206 10:29:26.190524  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.190528  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190533  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.190536  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190540  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190547  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.190554  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.190557  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190561  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.190565  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190569  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190572  522370 command_runner.go:130] >     },
	I1206 10:29:26.190574  522370 command_runner.go:130] >     {
	I1206 10:29:26.190581  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.190584  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190590  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.190593  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190597  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190604  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.190628  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.190632  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190636  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.190639  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190643  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190646  522370 command_runner.go:130] >       },
	I1206 10:29:26.190650  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190653  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190656  522370 command_runner.go:130] >     },
	I1206 10:29:26.190659  522370 command_runner.go:130] >     {
	I1206 10:29:26.190665  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.190669  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190673  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.190676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190680  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190687  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.190694  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.190697  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190701  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.190705  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190709  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.190712  522370 command_runner.go:130] >       },
	I1206 10:29:26.190716  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190719  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.190722  522370 command_runner.go:130] >     }
	I1206 10:29:26.190724  522370 command_runner.go:130] >   ]
	I1206 10:29:26.190728  522370 command_runner.go:130] > }
	I1206 10:29:26.192099  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.192121  522370 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:29:26.192130  522370 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:29:26.192245  522370 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
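	The ExecStart line in the drop-in above is assembled from the cluster config. A rough sketch of how such a command line could be built from the logged fields (the helper is hypothetical, not minikube's real generator):

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletExecStart assembles a kubelet command line like the unit above.
	// Paths and flags mirror the logged drop-in.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		args := []string{
			fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", version),
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--cgroups-per-qos=false",
			"--config=/var/lib/kubelet/config.yaml",
			"--enforce-node-allocatable=",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return strings.Join(args, " ")
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.35.0-beta.0", "functional-123579", "192.168.49.2"))
	}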
	I1206 10:29:26.192338  522370 ssh_runner.go:195] Run: crio config
	I1206 10:29:26.220366  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.219989922Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:29:26.220411  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220176363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:29:26.220654  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22050187Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:29:26.220871  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220715248Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:29:26.221165  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22098899Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:26.221621  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.221432459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:29:26.238478  522370 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
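	The stderr lines show CRI-O's layered config load: the base file first, then each drop-in under /etc/crio/crio.conf.d in lexical order, so 10-crio.conf overrides 02-crio.conf on conflicting keys. A small sketch that lists the drop-ins in the order they would be applied (paths as in the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// os.ReadDir returns entries sorted by filename, which matches
		// CRI-O's application order for drop-in files: later names win.
		dir := "/etc/crio/crio.conf.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			if !e.IsDir() {
				fmt.Println(filepath.Join(dir, e.Name()))
			}
		}
	}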
	I1206 10:29:26.263608  522370 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:29:26.263638  522370 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:29:26.263647  522370 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:29:26.263651  522370 command_runner.go:130] > #
	I1206 10:29:26.263687  522370 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:29:26.263707  522370 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:29:26.263714  522370 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:29:26.263721  522370 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:29:26.263726  522370 command_runner.go:130] > # reload'.
	I1206 10:29:26.263732  522370 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:29:26.263756  522370 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:29:26.263778  522370 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:29:26.263789  522370 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:29:26.263793  522370 command_runner.go:130] > [crio]
	I1206 10:29:26.263802  522370 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:29:26.263811  522370 command_runner.go:130] > # containers images, in this directory.
	I1206 10:29:26.263826  522370 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:29:26.263848  522370 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:29:26.263868  522370 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:29:26.263877  522370 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:29:26.263885  522370 command_runner.go:130] > # imagestore = ""
	I1206 10:29:26.263894  522370 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:29:26.263901  522370 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:29:26.263908  522370 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:29:26.263914  522370 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:29:26.263920  522370 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:29:26.263936  522370 command_runner.go:130] > # storage_option = [
	I1206 10:29:26.263952  522370 command_runner.go:130] > # ]
	I1206 10:29:26.263965  522370 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:29:26.263972  522370 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:29:26.263985  522370 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:29:26.263995  522370 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:29:26.264002  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:29:26.264006  522370 command_runner.go:130] > # always happen on a node reboot
	I1206 10:29:26.264013  522370 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:29:26.264036  522370 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:29:26.264050  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:29:26.264055  522370 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:29:26.264060  522370 command_runner.go:130] > # version_file_persist = ""
	I1206 10:29:26.264078  522370 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:29:26.264092  522370 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:29:26.264096  522370 command_runner.go:130] > # internal_wipe = true
	I1206 10:29:26.264105  522370 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:29:26.264113  522370 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:29:26.264117  522370 command_runner.go:130] > # internal_repair = true
	I1206 10:29:26.264124  522370 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:29:26.264131  522370 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:29:26.264150  522370 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:29:26.264171  522370 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:29:26.264181  522370 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:29:26.264188  522370 command_runner.go:130] > [crio.api]
	I1206 10:29:26.264194  522370 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:29:26.264202  522370 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:29:26.264208  522370 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:29:26.264214  522370 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:29:26.264221  522370 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:29:26.264226  522370 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:29:26.264241  522370 command_runner.go:130] > # stream_port = "0"
	I1206 10:29:26.264256  522370 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:29:26.264261  522370 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:29:26.264279  522370 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:29:26.264295  522370 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:29:26.264302  522370 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:29:26.264317  522370 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264326  522370 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:29:26.264332  522370 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:29:26.264338  522370 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264355  522370 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:29:26.264373  522370 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:29:26.264389  522370 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:29:26.264395  522370 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:29:26.264399  522370 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:29:26.264435  522370 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264448  522370 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:29:26.264456  522370 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264460  522370 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:29:26.264467  522370 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:29:26.264476  522370 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:29:26.264479  522370 command_runner.go:130] > [crio.runtime]
	I1206 10:29:26.264489  522370 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:29:26.264495  522370 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:29:26.264506  522370 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:29:26.264513  522370 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:29:26.264524  522370 command_runner.go:130] > # default_ulimits = [
	I1206 10:29:26.264527  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264534  522370 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:29:26.264543  522370 command_runner.go:130] > # no_pivot = false
	I1206 10:29:26.264549  522370 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:29:26.264555  522370 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:29:26.264561  522370 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:29:26.264569  522370 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:29:26.264576  522370 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:29:26.264584  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264591  522370 command_runner.go:130] > # conmon = ""
	I1206 10:29:26.264595  522370 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:29:26.264602  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:29:26.264612  522370 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:29:26.264623  522370 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:29:26.264629  522370 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:29:26.264643  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264647  522370 command_runner.go:130] > # conmon_env = [
	I1206 10:29:26.264650  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264655  522370 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:29:26.264660  522370 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:29:26.264668  522370 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:29:26.264674  522370 command_runner.go:130] > # default_env = [
	I1206 10:29:26.264677  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264683  522370 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:29:26.264699  522370 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:29:26.264703  522370 command_runner.go:130] > # selinux = false
	I1206 10:29:26.264710  522370 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:29:26.264720  522370 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:29:26.264729  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264734  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.264740  522370 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:29:26.264745  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264751  522370 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:29:26.264759  522370 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:29:26.264767  522370 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:29:26.264774  522370 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:29:26.264789  522370 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:29:26.264794  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264799  522370 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:29:26.264807  522370 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:29:26.264817  522370 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:29:26.264821  522370 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:29:26.264828  522370 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:29:26.264834  522370 command_runner.go:130] > # blockio parameters.
	I1206 10:29:26.264838  522370 command_runner.go:130] > # blockio_reload = false
	I1206 10:29:26.264849  522370 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:29:26.264856  522370 command_runner.go:130] > # irqbalance daemon.
	I1206 10:29:26.264862  522370 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:29:26.264868  522370 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:29:26.264877  522370 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:29:26.264889  522370 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:29:26.264897  522370 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:29:26.264904  522370 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:29:26.264910  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264917  522370 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:29:26.264922  522370 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:29:26.264926  522370 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:29:26.264932  522370 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:29:26.264936  522370 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:29:26.264946  522370 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:29:26.264954  522370 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:29:26.264958  522370 command_runner.go:130] > # will be added.
	I1206 10:29:26.264966  522370 command_runner.go:130] > # default_capabilities = [
	I1206 10:29:26.264970  522370 command_runner.go:130] > # 	"CHOWN",
	I1206 10:29:26.264974  522370 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:29:26.264986  522370 command_runner.go:130] > # 	"FSETID",
	I1206 10:29:26.264990  522370 command_runner.go:130] > # 	"FOWNER",
	I1206 10:29:26.264993  522370 command_runner.go:130] > # 	"SETGID",
	I1206 10:29:26.264996  522370 command_runner.go:130] > # 	"SETUID",
	I1206 10:29:26.265019  522370 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:29:26.265029  522370 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:29:26.265035  522370 command_runner.go:130] > # 	"KILL",
	I1206 10:29:26.265038  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265046  522370 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:29:26.265056  522370 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:29:26.265061  522370 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:29:26.265069  522370 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:29:26.265075  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265088  522370 command_runner.go:130] > default_sysctls = [
	I1206 10:29:26.265093  522370 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:29:26.265096  522370 command_runner.go:130] > ]
	I1206 10:29:26.265101  522370 command_runner.go:130] > # List of devices on the host that a
	I1206 10:29:26.265110  522370 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:29:26.265114  522370 command_runner.go:130] > # allowed_devices = [
	I1206 10:29:26.265118  522370 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:29:26.265123  522370 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:29:26.265127  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265134  522370 command_runner.go:130] > # List of additional devices. specified as
	I1206 10:29:26.265142  522370 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:29:26.265150  522370 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:29:26.265156  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265160  522370 command_runner.go:130] > # additional_devices = [
	I1206 10:29:26.265164  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265169  522370 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:29:26.265179  522370 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:29:26.265184  522370 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:29:26.265188  522370 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:29:26.265194  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265200  522370 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:29:26.265206  522370 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:29:26.265213  522370 command_runner.go:130] > # Defaults to false.
	I1206 10:29:26.265218  522370 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:29:26.265225  522370 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:29:26.265233  522370 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:29:26.265237  522370 command_runner.go:130] > # hooks_dir = [
	I1206 10:29:26.265245  522370 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:29:26.265248  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265264  522370 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:29:26.265271  522370 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:29:26.265277  522370 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:29:26.265282  522370 command_runner.go:130] > #
	I1206 10:29:26.265293  522370 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:29:26.265302  522370 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:29:26.265309  522370 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:29:26.265312  522370 command_runner.go:130] > #
	I1206 10:29:26.265319  522370 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:29:26.265333  522370 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:29:26.265340  522370 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:29:26.265345  522370 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:29:26.265351  522370 command_runner.go:130] > #
	I1206 10:29:26.265355  522370 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:29:26.265360  522370 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:29:26.265367  522370 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:29:26.265371  522370 command_runner.go:130] > # pids_limit = -1
	I1206 10:29:26.265378  522370 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1206 10:29:26.265386  522370 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:29:26.265392  522370 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:29:26.265403  522370 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:29:26.265407  522370 command_runner.go:130] > # log_size_max = -1
	I1206 10:29:26.265416  522370 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:29:26.265423  522370 command_runner.go:130] > # log_to_journald = false
	I1206 10:29:26.265431  522370 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:29:26.265437  522370 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:29:26.265448  522370 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:29:26.265453  522370 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:29:26.265458  522370 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:29:26.265464  522370 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:29:26.265470  522370 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:29:26.265476  522370 command_runner.go:130] > # read_only = false
	I1206 10:29:26.265482  522370 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:29:26.265491  522370 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:29:26.265495  522370 command_runner.go:130] > # live configuration reload.
	I1206 10:29:26.265508  522370 command_runner.go:130] > # log_level = "info"
	I1206 10:29:26.265514  522370 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:29:26.265523  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.265529  522370 command_runner.go:130] > # log_filter = ""
	I1206 10:29:26.265536  522370 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265542  522370 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:29:26.265548  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265557  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265564  522370 command_runner.go:130] > # uid_mappings = ""
	I1206 10:29:26.265570  522370 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265578  522370 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:29:26.265586  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265597  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265602  522370 command_runner.go:130] > # gid_mappings = ""
	I1206 10:29:26.265611  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:29:26.265620  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265626  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265635  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265642  522370 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:29:26.265648  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:29:26.265656  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265663  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265680  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265684  522370 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:29:26.265691  522370 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:29:26.265701  522370 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:29:26.265707  522370 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 10:29:26.265713  522370 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:29:26.265719  522370 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:29:26.265727  522370 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:29:26.265733  522370 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:29:26.265740  522370 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:29:26.265747  522370 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:29:26.265754  522370 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:29:26.265768  522370 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:29:26.265780  522370 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:29:26.265787  522370 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:29:26.265794  522370 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:29:26.265801  522370 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:29:26.265809  522370 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:29:26.265814  522370 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:29:26.265818  522370 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:29:26.265824  522370 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:29:26.265832  522370 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:29:26.265838  522370 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:29:26.265846  522370 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:29:26.265857  522370 command_runner.go:130] > # pinns_path = ""
	I1206 10:29:26.265863  522370 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:29:26.265869  522370 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:29:26.265874  522370 command_runner.go:130] > # enable_criu_support = true
	I1206 10:29:26.265881  522370 command_runner.go:130] > # Enable/disable the generation of the container,
	I1206 10:29:26.265887  522370 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:29:26.265894  522370 command_runner.go:130] > # enable_pod_events = false
	I1206 10:29:26.265901  522370 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:29:26.265906  522370 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:29:26.265910  522370 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:29:26.265915  522370 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:29:26.265925  522370 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1206 10:29:26.265945  522370 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:29:26.265951  522370 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:29:26.265960  522370 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:29:26.265970  522370 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:29:26.265974  522370 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:29:26.265977  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265984  522370 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:29:26.265993  522370 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:29:26.265999  522370 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:29:26.266004  522370 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:29:26.266011  522370 command_runner.go:130] > #
	I1206 10:29:26.266019  522370 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:29:26.266024  522370 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:29:26.266030  522370 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:29:26.266035  522370 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:29:26.266042  522370 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:29:26.266047  522370 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:29:26.266059  522370 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:29:26.266065  522370 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:29:26.266068  522370 command_runner.go:130] > # monitor_env = []
	I1206 10:29:26.266080  522370 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:29:26.266084  522370 command_runner.go:130] > # allowed_annotations = []
	I1206 10:29:26.266090  522370 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:29:26.266094  522370 command_runner.go:130] > # no_sync_log = false
	I1206 10:29:26.266098  522370 command_runner.go:130] > # default_annotations = {}
	I1206 10:29:26.266105  522370 command_runner.go:130] > # stream_websockets = false
	I1206 10:29:26.266112  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.266145  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.266155  522370 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:29:26.266162  522370 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:29:26.266168  522370 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:29:26.266182  522370 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:29:26.266186  522370 command_runner.go:130] > #   in $PATH.
	I1206 10:29:26.266192  522370 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:29:26.266199  522370 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:29:26.266206  522370 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:29:26.266212  522370 command_runner.go:130] > #   state.
	I1206 10:29:26.266218  522370 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:29:26.266224  522370 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1206 10:29:26.266232  522370 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:29:26.266239  522370 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:29:26.266247  522370 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:29:26.266254  522370 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:29:26.266265  522370 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:29:26.266275  522370 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:29:26.266283  522370 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:29:26.266287  522370 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:29:26.266294  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:29:26.266304  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:29:26.266315  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:29:26.266324  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:29:26.266332  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:29:26.266339  522370 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:29:26.266348  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:29:26.266356  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:29:26.266368  522370 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:29:26.266375  522370 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:29:26.266382  522370 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:29:26.266388  522370 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:29:26.266394  522370 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:29:26.266410  522370 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:29:26.266417  522370 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:29:26.266425  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:29:26.266435  522370 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:29:26.266440  522370 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:29:26.266447  522370 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:29:26.266455  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:29:26.266463  522370 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:29:26.266467  522370 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:29:26.266475  522370 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:29:26.266479  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:29:26.266489  522370 command_runner.go:130] > #   When using the pod runtime and conmon-rs, monitor_env can be used to further configure
	I1206 10:29:26.266501  522370 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:29:26.266510  522370 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:29:26.266520  522370 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:29:26.266531  522370 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:29:26.266542  522370 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:29:26.266552  522370 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:29:26.266559  522370 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:29:26.266571  522370 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:29:26.266585  522370 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:29:26.266593  522370 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:29:26.266603  522370 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:29:26.266610  522370 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:29:26.266617  522370 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:29:26.266625  522370 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:29:26.266636  522370 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:29:26.266641  522370 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:29:26.266647  522370 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:29:26.266656  522370 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:29:26.266660  522370 command_runner.go:130] > #
	I1206 10:29:26.266665  522370 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:29:26.266675  522370 command_runner.go:130] > #
	I1206 10:29:26.266682  522370 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:29:26.266689  522370 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:29:26.266694  522370 command_runner.go:130] > #
	I1206 10:29:26.266701  522370 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:29:26.266708  522370 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:29:26.266711  522370 command_runner.go:130] > #
	I1206 10:29:26.266718  522370 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:29:26.266723  522370 command_runner.go:130] > # feature.
	I1206 10:29:26.266726  522370 command_runner.go:130] > #
	I1206 10:29:26.266732  522370 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1206 10:29:26.266739  522370 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:29:26.266747  522370 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:29:26.266754  522370 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:29:26.266763  522370 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:29:26.266768  522370 command_runner.go:130] > #
	I1206 10:29:26.266774  522370 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:29:26.266786  522370 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:29:26.266792  522370 command_runner.go:130] > #
	I1206 10:29:26.266800  522370 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1206 10:29:26.266806  522370 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:29:26.266809  522370 command_runner.go:130] > #
	I1206 10:29:26.266815  522370 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:29:26.266825  522370 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:29:26.266831  522370 command_runner.go:130] > # limitation.
	I1206 10:29:26.266835  522370 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:29:26.266848  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:29:26.266853  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266856  522370 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:29:26.266862  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266868  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266873  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266880  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266884  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266889  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266892  522370 command_runner.go:130] > allowed_annotations = [
	I1206 10:29:26.266897  522370 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:29:26.266900  522370 command_runner.go:130] > ]
	I1206 10:29:26.266904  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266911  522370 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:29:26.266916  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:29:26.266921  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266932  522370 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:29:26.266939  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266943  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266947  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266952  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266961  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266966  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266970  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266981  522370 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:29:26.266987  522370 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:29:26.266995  522370 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:29:26.267006  522370 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 10:29:26.267024  522370 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:29:26.267035  522370 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:29:26.267047  522370 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:29:26.267054  522370 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:29:26.267063  522370 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:29:26.267072  522370 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:29:26.267080  522370 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:29:26.267087  522370 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:29:26.267094  522370 command_runner.go:130] > # Example:
	I1206 10:29:26.267098  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:29:26.267103  522370 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:29:26.267108  522370 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:29:26.267132  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:29:26.267141  522370 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:29:26.267145  522370 command_runner.go:130] > # cpushares = "5"
	I1206 10:29:26.267149  522370 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:29:26.267152  522370 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:29:26.267156  522370 command_runner.go:130] > # cpulimit = "35"
	I1206 10:29:26.267159  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.267165  522370 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:29:26.267172  522370 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:29:26.267181  522370 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:29:26.267188  522370 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:29:26.267199  522370 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:29:26.267205  522370 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 10:29:26.267210  522370 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:29:26.267224  522370 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:29:26.267229  522370 command_runner.go:130] > # Default value is set to true
	I1206 10:29:26.267234  522370 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:29:26.267244  522370 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:29:26.267251  522370 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:29:26.267255  522370 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:29:26.267260  522370 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:29:26.267265  522370 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1206 10:29:26.267277  522370 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:29:26.267283  522370 command_runner.go:130] > # timezone = ""
	I1206 10:29:26.267290  522370 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:29:26.267293  522370 command_runner.go:130] > #
	I1206 10:29:26.267299  522370 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:29:26.267310  522370 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:29:26.267313  522370 command_runner.go:130] > [crio.image]
	I1206 10:29:26.267319  522370 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:29:26.267324  522370 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:29:26.267332  522370 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:29:26.267339  522370 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267343  522370 command_runner.go:130] > # global_auth_file = ""
	I1206 10:29:26.267351  522370 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:29:26.267359  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267364  522370 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.267378  522370 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:29:26.267385  522370 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267396  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267401  522370 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:29:26.267407  522370 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:29:26.267413  522370 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 10:29:26.267421  522370 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 10:29:26.267427  522370 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:29:26.267434  522370 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:29:26.267440  522370 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:29:26.267447  522370 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:29:26.267455  522370 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:29:26.267461  522370 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:29:26.267471  522370 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:29:26.267480  522370 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:29:26.267484  522370 command_runner.go:130] > # pinned_images = [
	I1206 10:29:26.267488  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267494  522370 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:29:26.267502  522370 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:29:26.267509  522370 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:29:26.267517  522370 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:29:26.267525  522370 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:29:26.267530  522370 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:29:26.267538  522370 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:29:26.267548  522370 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:29:26.267556  522370 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:29:26.267566  522370 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1206 10:29:26.267572  522370 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:29:26.267579  522370 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:29:26.267587  522370 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:29:26.267594  522370 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:29:26.267597  522370 command_runner.go:130] > # changing them here.
	I1206 10:29:26.267603  522370 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:29:26.267608  522370 command_runner.go:130] > # insecure_registries = [
	I1206 10:29:26.267613  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267620  522370 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:29:26.267637  522370 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:29:26.267641  522370 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:29:26.267646  522370 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:29:26.267671  522370 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:29:26.267678  522370 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:29:26.267687  522370 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:29:26.267699  522370 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:29:26.267706  522370 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:29:26.267714  522370 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1206 10:29:26.267723  522370 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:29:26.267732  522370 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:29:26.267739  522370 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:29:26.267746  522370 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:29:26.267753  522370 command_runner.go:130] > #   If "enforcing", an image pull will fail if a short name is used and its resolution is ambiguous.
	I1206 10:29:26.267758  522370 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:29:26.267766  522370 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:29:26.267775  522370 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1206 10:29:26.267781  522370 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:29:26.267788  522370 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:29:26.267795  522370 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:29:26.267798  522370 command_runner.go:130] > # CNI plugins.
	I1206 10:29:26.267802  522370 command_runner.go:130] > [crio.network]
	I1206 10:29:26.267808  522370 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:29:26.267816  522370 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:29:26.267820  522370 command_runner.go:130] > # cni_default_network = ""
	I1206 10:29:26.267826  522370 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:29:26.267836  522370 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:29:26.267842  522370 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:29:26.267845  522370 command_runner.go:130] > # plugin_dirs = [
	I1206 10:29:26.267853  522370 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:29:26.267856  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267861  522370 command_runner.go:130] > # List of included pod metrics.
	I1206 10:29:26.267867  522370 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:29:26.267870  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267879  522370 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:29:26.267885  522370 command_runner.go:130] > [crio.metrics]
	I1206 10:29:26.267890  522370 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:29:26.267897  522370 command_runner.go:130] > # enable_metrics = false
	I1206 10:29:26.267902  522370 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:29:26.267906  522370 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:29:26.267912  522370 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:29:26.267919  522370 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:29:26.267925  522370 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:29:26.267938  522370 command_runner.go:130] > # metrics_collectors = [
	I1206 10:29:26.267943  522370 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:29:26.267947  522370 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:29:26.267951  522370 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:29:26.267954  522370 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:29:26.267958  522370 command_runner.go:130] > # 	"operations_total",
	I1206 10:29:26.267962  522370 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:29:26.267966  522370 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:29:26.267970  522370 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:29:26.267977  522370 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:29:26.267981  522370 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:29:26.267986  522370 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:29:26.267990  522370 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:29:26.267993  522370 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:29:26.267997  522370 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:29:26.268003  522370 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:29:26.268007  522370 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:29:26.268011  522370 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:29:26.268014  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268020  522370 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:29:26.268024  522370 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:29:26.268029  522370 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:29:26.268032  522370 command_runner.go:130] > # metrics_port = 9090
	I1206 10:29:26.268037  522370 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:29:26.268041  522370 command_runner.go:130] > # metrics_socket = ""
	I1206 10:29:26.268046  522370 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:29:26.268052  522370 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:29:26.268061  522370 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:29:26.268070  522370 command_runner.go:130] > # certificate on any modification event.
	I1206 10:29:26.268074  522370 command_runner.go:130] > # metrics_cert = ""
	I1206 10:29:26.268079  522370 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:29:26.268086  522370 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:29:26.268090  522370 command_runner.go:130] > # metrics_key = ""
	I1206 10:29:26.268099  522370 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:29:26.268106  522370 command_runner.go:130] > [crio.tracing]
	I1206 10:29:26.268112  522370 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:29:26.268116  522370 command_runner.go:130] > # enable_tracing = false
	I1206 10:29:26.268121  522370 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 10:29:26.268127  522370 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:29:26.268135  522370 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:29:26.268143  522370 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 10:29:26.268147  522370 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:29:26.268150  522370 command_runner.go:130] > [crio.nri]
	I1206 10:29:26.268155  522370 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:29:26.268158  522370 command_runner.go:130] > # enable_nri = true
	I1206 10:29:26.268162  522370 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:29:26.268166  522370 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:29:26.268170  522370 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:29:26.268174  522370 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:29:26.268181  522370 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:29:26.268187  522370 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:29:26.268195  522370 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:29:26.268252  522370 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:29:26.268260  522370 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:29:26.268265  522370 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:29:26.268270  522370 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:29:26.268274  522370 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:29:26.268287  522370 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:29:26.268294  522370 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:29:26.268307  522370 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:29:26.268312  522370 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:29:26.268322  522370 command_runner.go:130] > # - OCI hook injection
	I1206 10:29:26.268327  522370 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:29:26.268333  522370 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:29:26.268340  522370 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:29:26.268344  522370 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:29:26.268356  522370 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:29:26.268363  522370 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:29:26.268368  522370 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:29:26.268375  522370 command_runner.go:130] > #
	I1206 10:29:26.268380  522370 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:29:26.268384  522370 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:29:26.268397  522370 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:29:26.268403  522370 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:29:26.268408  522370 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:29:26.268416  522370 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:29:26.268421  522370 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:29:26.268425  522370 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:29:26.268431  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268436  522370 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:29:26.268442  522370 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:29:26.268446  522370 command_runner.go:130] > [crio.stats]
	I1206 10:29:26.268454  522370 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:29:26.268465  522370 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:29:26.268469  522370 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:29:26.268475  522370 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:29:26.268484  522370 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:29:26.268489  522370 command_runner.go:130] > # collection_period = 0
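
The runtimes table documented in the dump above can be extended without editing the main config: CRI-O merges drop-in files from /etc/crio/crio.conf.d. As a minimal sketch, assuming a hypothetical handler name "myrunc" and illustrative paths (none of these values come from this run), an extra handler that is allowed to process the seccomp-notifier annotation described above could be registered like this:

	# Sketch only: the handler name and paths below are assumptions for illustration.
	sudo tee /etc/crio/crio.conf.d/99-myrunc.conf >/dev/null <<-'EOF'
		[crio.runtime.runtimes.myrunc]
		runtime_path = "/usr/local/bin/runc"
		runtime_type = "oci"
		runtime_root = "/run/myrunc"
		monitor_path = "/usr/libexec/crio/conmon"
		allowed_annotations = [
			"io.kubernetes.cri-o.seccompNotifierAction",
		]
	EOF
	sudo systemctl restart crio   # drop-ins are merged when CRI-O starts
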
	I1206 10:29:26.268581  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:26.268595  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:26.268620  522370 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:29:26.268646  522370 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:29:26.268768  522370 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
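
A rendered config like the one above can also be sanity-checked by hand before kubelet is restarted against it. A small sketch, assuming kubeadm's "config validate" subcommand is available in this version; the binary and staged-file paths are the ones this log itself uses a few lines below:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
		--config /var/tmp/minikube/kubeadm.yaml.new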
	
	I1206 10:29:26.268849  522370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:29:26.276198  522370 command_runner.go:130] > kubeadm
	I1206 10:29:26.276217  522370 command_runner.go:130] > kubectl
	I1206 10:29:26.276221  522370 command_runner.go:130] > kubelet
	I1206 10:29:26.277128  522370 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:29:26.277245  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:29:26.285085  522370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:29:26.297894  522370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:29:26.310811  522370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:29:26.323875  522370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:29:26.327560  522370 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:29:26.327877  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:26.463333  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:27.181623  522370 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:29:27.181646  522370 certs.go:195] generating shared ca certs ...
	I1206 10:29:27.181662  522370 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.181794  522370 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:29:27.181841  522370 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:29:27.181855  522370 certs.go:257] generating profile certs ...
	I1206 10:29:27.181981  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:29:27.182049  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:29:27.182120  522370 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:29:27.182139  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:29:27.182178  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:29:27.182195  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:29:27.182206  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:29:27.182221  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:29:27.182231  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:29:27.182242  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:29:27.182252  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:29:27.182310  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:29:27.182343  522370 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:29:27.182351  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:29:27.182391  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:29:27.182420  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:29:27.182445  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:29:27.182502  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:27.182537  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.182553  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.182567  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.183155  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:29:27.204776  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:29:27.223807  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:29:27.246828  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:29:27.269763  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:29:27.290536  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:29:27.308147  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:29:27.326269  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:29:27.344314  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:29:27.361949  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:29:27.379296  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:29:27.396825  522370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:29:27.409539  522370 ssh_runner.go:195] Run: openssl version
	I1206 10:29:27.415501  522370 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:29:27.415885  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.423483  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:29:27.431381  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435336  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435420  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435491  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.477997  522370 command_runner.go:130] > 51391683
	I1206 10:29:27.478450  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:29:27.485910  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.493199  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:29:27.500533  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504197  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504254  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504314  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.549795  522370 command_runner.go:130] > 3ec20f2e
	I1206 10:29:27.550294  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:29:27.557856  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.565301  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:29:27.572772  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576768  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576853  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576925  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.618106  522370 command_runner.go:130] > b5213941
	I1206 10:29:27.618536  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
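The hash-and-link steps above install each PEM into the node's trust store: "openssl x509 -hash -noout" prints the certificate's subject hash (51391683, 3ec20f2e, b5213941 in the log), and the cert is then symlinked as /etc/ssl/certs/<hash>.0, which is where OpenSSL looks up trusted CAs. A minimal Go sketch of the same pattern, assuming openssl is on PATH and the PEM is already in place (the scp lines handle that); this mirrors the commands in the log, not minikube's actual certs.go implementation:

    // install_ca.go: compute a cert's OpenSSL subject hash and link it
    // into /etc/ssl/certs/<hash>.0, like the ssh_runner commands above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCACert(pemPath string) error {
        // Same command as the log: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // the -f in "ln -fs": replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }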
	I1206 10:29:27.626130  522370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629702  522370 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629728  522370 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:29:27.629736  522370 command_runner.go:130] > Device: 259,1	Inode: 3640487     Links: 1
	I1206 10:29:27.629742  522370 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:27.629749  522370 command_runner.go:130] > Access: 2025-12-06 10:25:18.913466133 +0000
	I1206 10:29:27.629754  522370 command_runner.go:130] > Modify: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629758  522370 command_runner.go:130] > Change: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629764  522370 command_runner.go:130] >  Birth: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629823  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:29:27.670498  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.670941  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:29:27.711871  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.712351  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:29:27.753204  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.753665  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:29:27.795554  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.796089  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:29:27.836809  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.837203  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:29:27.878291  522370 command_runner.go:130] > Certificate will not expire
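The "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds); a nonzero exit would trigger regeneration. The same check done natively with crypto/x509 instead of shelling out; the path is one of the certs checked in the log:

    // checkend.go: report whether a PEM certificate expires within d,
    // equivalent to "openssl x509 -noout -checkend 86400" above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire") // the output seen above
        }
    }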
	I1206 10:29:27.878357  522370 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:27.878433  522370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:27.878503  522370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:27.905835  522370 cri.go:89] found id: ""
	I1206 10:29:27.905910  522370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:29:27.912750  522370 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:29:27.912773  522370 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:29:27.912780  522370 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:29:27.913690  522370 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:29:27.913706  522370 kubeadm.go:598] restartPrimaryControlPlane start ...
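The "sudo ls" above is the restart-vs-fresh-init probe: if the kubeadm flags file, the kubelet config, and the etcd data directory all survive from a previous run, minikube attempts a cluster restart rather than a new "kubeadm init". A sketch of that decision under the same paths (an illustration of the check, not the kubeadm.go source):

    // probe.go: the state-detection step behind "found existing
    // configuration files, will attempt cluster restart" above.
    package main

    import (
        "fmt"
        "os"
    )

    func hasExistingCluster() bool {
        for _, p := range []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        if hasExistingCluster() {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no existing configuration, a fresh kubeadm init would be needed")
        }
    }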
	I1206 10:29:27.913783  522370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:29:27.921335  522370 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:29:27.921755  522370 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.921867  522370 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "functional-123579" cluster setting kubeconfig missing "functional-123579" context setting]
	I1206 10:29:27.922200  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.922608  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.922766  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.923311  522370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:29:27.923332  522370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:29:27.923338  522370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:29:27.923344  522370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:29:27.923348  522370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
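The rest.Config dump above shows what client-go needs to reach the apiserver: the host URL plus the client cert, key, and CA paths taken from the repaired kubeconfig. A sketch, assuming k8s.io/client-go is available, that loads a kubeconfig and prints the same fields; it uses client-go's default ~/.kube/config path, where the log uses the Jenkins workspace copy:

    // config_dump.go: load a kubeconfig and print the connection fields
    // that appear in the kapi.go client-config lines above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        fmt.Println("Host:    ", cfg.Host) // e.g. https://192.168.49.2:8441
        fmt.Println("CertFile:", cfg.TLSClientConfig.CertFile)
        fmt.Println("KeyFile: ", cfg.TLSClientConfig.KeyFile)
        fmt.Println("CAFile:  ", cfg.TLSClientConfig.CAFile)
    }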
	I1206 10:29:27.923710  522370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:29:27.923805  522370 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:29:27.932172  522370 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:29:27.932206  522370 kubeadm.go:602] duration metric: took 18.493373ms to restartPrimaryControlPlane
	I1206 10:29:27.932216  522370 kubeadm.go:403] duration metric: took 53.86688ms to StartCluster
	I1206 10:29:27.932230  522370 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.932300  522370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.932906  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.933111  522370 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:29:27.933400  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:27.933457  522370 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:29:27.933598  522370 addons.go:70] Setting storage-provisioner=true in profile "functional-123579"
	I1206 10:29:27.933615  522370 addons.go:239] Setting addon storage-provisioner=true in "functional-123579"
	I1206 10:29:27.933640  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.933662  522370 addons.go:70] Setting default-storageclass=true in profile "functional-123579"
	I1206 10:29:27.933709  522370 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-123579"
	I1206 10:29:27.934067  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.934105  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.937180  522370 out.go:179] * Verifying Kubernetes components...
	I1206 10:29:27.943300  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:27.955394  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.955630  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.955941  522370 addons.go:239] Setting addon default-storageclass=true in "functional-123579"
	I1206 10:29:27.955970  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.956408  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.980014  522370 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:29:27.983923  522370 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:27.983954  522370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:29:27.984026  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:27.996144  522370 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:27.996165  522370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:29:27.996228  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:28.024613  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.044906  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
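The cli_runner lines above resolve where to SSH: the node is a Docker container, so minikube asks Docker which host port maps to the container's 22/tcp (33183 here) before opening the sshutil clients. The same lookup as a small Go program, reusing the exact inspect template from the log:

    // ssh_port.go: find the host port mapped to the node container's SSH
    // port, as the "docker container inspect -f ..." lines above do.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("functional-123579")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 33183 in the log above
    }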
	I1206 10:29:28.158003  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:28.171055  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:28.191069  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:28.930363  522370 node_ready.go:35] waiting up to 6m0s for node "functional-123579" to be "Ready" ...
	I1206 10:29:28.930490  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930625  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930666  522370 retry.go:31] will retry after 220.153302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930749  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930787  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930813  522370 retry.go:31] will retry after 205.296978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
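Each apply failure above ends in a "will retry after ..." line from retry.go, with the delay growing roughly exponentially and jittered (220ms, 205ms, 414ms, 542ms, and so on through the rest of this section). A sketch of that retry shape, assuming kubectl is on PATH; the initial delay, doubling, and jitter here are illustrative, not minikube's exact retry.go parameters:

    // apply_retry.go: re-run "kubectl apply --force -f <manifest>" with
    // jittered, roughly doubling delays until it succeeds or gives up.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        delay := 200 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }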
	I1206 10:29:28.930893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:28.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:28.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.136761  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.151269  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.213820  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.217541  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.217581  522370 retry.go:31] will retry after 414.855546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235243  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.235363  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235412  522370 retry.go:31] will retry after 542.074768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.431607  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.431755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.432098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.633557  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.704871  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.715208  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.715276  522370 retry.go:31] will retry after 512.072151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.778572  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.842567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.842631  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.842656  522370 retry.go:31] will retry after 453.896864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.930817  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.930917  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.931386  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.227644  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:30.292361  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.292404  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.292441  522370 retry.go:31] will retry after 965.22043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.297573  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:30.354035  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.357760  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.357796  522370 retry.go:31] will retry after 830.21573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.430970  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.431039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.431358  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:30.931272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
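The Request/Response pairs above are a readiness poll: node_ready.go GETs /api/v1/nodes/functional-123579 every 500ms, and while the apiserver is still down each dial fails with "connection refused" (hence status="" and milliseconds=0 in the round_trippers lines). A sketch of such a wait loop, again assuming k8s.io/client-go; the profile name and the 6m0s budget come from the log:

    // node_ready.go sketch: poll the node's Ready condition until a
    // deadline, retrying through "connection refused" errors.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // Matches the W... "error getting node ... (will retry)" lines.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(client, "functional-123579", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }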
	I1206 10:29:31.188810  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:31.258540  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:31.280251  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.280382  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.280411  522370 retry.go:31] will retry after 670.25639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331402  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.331517  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331545  522370 retry.go:31] will retry after 1.065706699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.430665  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.430772  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.431166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.930712  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.930893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.931401  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.951563  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:32.028942  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.028998  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.029018  522370 retry.go:31] will retry after 2.122665166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.397466  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:32.431043  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.431193  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.431584  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:32.458856  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.458892  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.458911  522370 retry.go:31] will retry after 1.728877951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.931628  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.931705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.932104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:32.932161  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:33.430893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.431324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:33.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.931279  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.152755  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:34.188350  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:34.249027  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.249069  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.249090  522370 retry.go:31] will retry after 3.684646027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294198  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.294244  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294296  522370 retry.go:31] will retry after 1.427612825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.431504  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.930753  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:35.430737  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:35.431258  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:35.722778  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:35.786215  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:35.786258  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.786277  522370 retry.go:31] will retry after 5.772571648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.931559  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.931640  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.431654  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.431914  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.930676  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.931086  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.430781  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.931882  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:37.931937  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:37.934240  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:38.012005  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:38.012049  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.012071  522370 retry.go:31] will retry after 2.264254307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.430647  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.430724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.431052  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:38.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.930848  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.931203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.931197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:40.276629  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:40.338233  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:40.338274  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.338294  522370 retry.go:31] will retry after 6.465617702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.431489  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.431563  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.431893  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:40.431948  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
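The GET/Response pairs that dominate this log are node_ready.go polling the node's Ready condition every ~500ms until the apiserver answers. A minimal client-go sketch of such a poll follows, using the standard clientset API; the loop, kubeconfig path, and node name are taken from the log, but the code itself is our illustration, not minikube's.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and inspects its Ready condition.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connection refused" while the apiserver restarts
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ready, err := nodeReady(cs, "functional-123579")
    		if err != nil {
    			fmt.Println("will retry:", err)
    		} else if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    }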
	[... identical GET polls repeated at 10:29:40.931 and 10:29:41.430, no response ...]
	I1206 10:29:41.559542  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:41.618815  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:41.618852  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.618871  522370 retry.go:31] will retry after 5.212992024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
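Note why validation, not the apply itself, is what fails first: kubectl apply fetches the /openapi/v2 schema from the apiserver to validate the manifest, so with nothing listening on port 8441 the command dies at TCP connect before any object is submitted. The suggested --validate=false would only skip the schema download; the apply would still need a reachable server. A quick probe like the following (our illustration, not part of the test) separates this "connection refused" state from TLS or auth failures:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The two addresses the log dials: localhost (kubectl's view from
    	// inside the node) and the node IP (minikube's view from outside).
    	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" while kube-apiserver is down
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s: listening\n", addr)
    	}
    }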
	[... GET polls repeated every ~500ms from 10:29:41.931 through 10:29:46.431, all with no response; node_ready.go:55 logged the "connection refused" warning again at 10:29:42.931 and 10:29:45.431 ...]
	I1206 10:29:46.804868  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:46.832399  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:46.865940  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.865975  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.865994  522370 retry.go:31] will retry after 4.982943882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.906612  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906632  522370 retry.go:31] will retry after 5.755281988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls repeated every ~500ms from 10:29:46.930 through 10:29:51.431, all with no response; node_ready.go:55 logged the "connection refused" warning again at 10:29:47.931 and 10:29:50.431 ...]
	I1206 10:29:51.849751  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:51.909824  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:51.909861  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.909882  522370 retry.go:31] will retry after 17.161477779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls repeated at 10:29:51.930 and 10:29:52.431, no response; node_ready.go:55 logged the "connection refused" warning again at 10:29:52.431 ...]
	I1206 10:29:52.663117  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:52.730608  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:52.730656  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.730678  522370 retry.go:31] will retry after 12.860735555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls repeated every ~500ms from 10:29:52.931 through 10:30:05.431, all with no response; node_ready.go:55 logged the "connection refused" warning again at 10:29:54.931, 10:29:56.931, 10:29:58.931, 10:30:00.932 and 10:30:03.431 ...]
	I1206 10:30:05.591568  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:05.650107  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:05.653722  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.653756  522370 retry.go:31] will retry after 16.31009922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls repeated every ~500ms from 10:30:05.931 through 10:30:08.931, all with no response; node_ready.go:55 logged the "connection refused" warning again at 10:30:05.931 and 10:30:08.431 ...]
	I1206 10:30:09.072554  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:09.131495  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:09.131531  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.131550  522370 retry.go:31] will retry after 16.873374267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls repeated every ~500ms from 10:30:09.430 through 10:30:21.931, all with no response; node_ready.go:55 logged the "connection refused" warning again at 10:30:10.931, 10:30:12.931, 10:30:15.431, 10:30:17.931 and 10:30:20.431 ...]
	I1206 10:30:21.964425  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:22.031284  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:22.031334  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:22.031356  522370 retry.go:31] will retry after 35.791693435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls repeated every ~500ms from 10:30:22.430 through 10:30:25.431, all with no response; node_ready.go:55 logged the "connection refused" warning again at 10:30:22.931 ...]
	W1206 10:30:25.431230  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:25.931005  522370 type.go:168] "Request Body" body=""
	I1206 10:30:25.931194  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:25.932226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:26.005763  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:26.074782  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:26.074834  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:26.074855  522370 retry.go:31] will retry after 34.92165894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
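Every validation failure in this log traces to one dependency: kubectl apply (without --validate=false) must first fetch the schema from the apiserver's /openapi/v2 endpoint, the exact URL quoted in the stderr above. A minimal probe of that endpoint, reusing the localhost:8441 address from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	// Same URL kubectl reports failing to download in the stderr above.
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// While the apiserver is down this prints the same
		// "connect: connection refused" seen in the kubectl output.
		fmt.Println("openapi unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}

Once this probe answers, the queued applies would start succeeding; in this run the apiserver never comes back within the retry window.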
	[... the ~500ms polling loop runs unchanged from 10:30:26.4 through 10:30:57.4, every response refused, with node_ready "will retry" warnings roughly every 2s (10:30:27.4, 10:30:29.9, ..., 10:30:56.9); the repeated request/response entries are omitted ...]
	I1206 10:30:57.823985  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:57.887311  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891368  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891481  522370 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... polling continues from 10:30:57.9 through 10:31:00.9, all refused, including a node_ready "will retry" warning at 10:30:59.4 ...]
	I1206 10:31:00.997513  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:01.064863  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068488  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068586  522370 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:31:01.073496  522370 out.go:179] * Enabled addons: 
	I1206 10:31:01.076263  522370 addons.go:530] duration metric: took 1m33.142805076s for enable addons: enabled=[]
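These two summary lines record the outcome of the whole addon phase: each addon's apply callback ran with retries for ~1m33s, every callback ultimately failed against the refused apiserver, and so the enabled list is empty. A compressed sketch of that flow (illustrative only, not minikube's addons.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// enableAddons runs each addon's callback, reports failures as warnings
// like the "! Enabling '...' returned an error" lines above, and returns
// only the addons whose callbacks succeeded.
func enableAddons(callbacks map[string]func() error) []string {
	start := time.Now()
	var enabled []string
	for name, cb := range callbacks {
		if err := cb(); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
	return enabled
}

func main() {
	// In this run both callbacks fail, so enabled=[] as in the log.
	enableAddons(map[string]func() error{
		"default-storageclass": func() error { return errors.New("connection refused") },
		"storage-provisioner":  func() error { return errors.New("connection refused") },
	})
}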
	I1206 10:31:01.430965  522370 type.go:168] "Request Body" body=""
	I1206 10:31:01.431062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.431429  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:01.431491  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:01.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:31:01.930813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.931075  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:31:02.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.931199  522370 type.go:168] "Request Body" body=""
	I1206 10:31:02.931311  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.931626  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.431408  522370 type.go:168] "Request Body" body=""
	I1206 10:31:03.431503  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.431775  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:03.431826  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:03.931635  522370 type.go:168] "Request Body" body=""
	I1206 10:31:03.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.932077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.430812  522370 type.go:168] "Request Body" body=""
	I1206 10:31:04.430889  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.431222  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.930928  522370 type.go:168] "Request Body" body=""
	I1206 10:31:04.931001  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.931294  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.430732  522370 type.go:168] "Request Body" body=""
	I1206 10:31:05.430807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.431205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.930777  522370 type.go:168] "Request Body" body=""
	I1206 10:31:05.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.931245  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:05.931317  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.430961  522370 type.go:168] "Request Body" body=""
	I1206 10:31:06.431031  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.431335  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:31:06.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.430742  522370 type.go:168] "Request Body" body=""
	I1206 10:31:07.430822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.931277  522370 type.go:168] "Request Body" body=""
	I1206 10:31:07.931353  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.931638  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:07.931679  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.431521  522370 type.go:168] "Request Body" body=""
	I1206 10:31:08.431597  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.930701  522370 type.go:168] "Request Body" body=""
	I1206 10:31:08.930775  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.931170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.430860  522370 type.go:168] "Request Body" body=""
	I1206 10:31:09.430943  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.431243  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.930955  522370 type.go:168] "Request Body" body=""
	I1206 10:31:09.931035  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.931420  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:31:10.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.431264  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 request/empty-response cycle above repeats every ~500 ms from 10:31:10 through 10:32:11 (roughly 120 near-identical cycles elided), and the node_ready.go:55 "connection refused" warning recurs about every two seconds while the readiness poll retries ...]
	I1206 10:32:11.430792  522370 type.go:168] "Request Body" body=""
	I1206 10:32:11.430872  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.431253  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.930964  522370 type.go:168] "Request Body" body=""
	I1206 10:32:11.931039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.931369  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:11.931419  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:12.430849  522370 type.go:168] "Request Body" body=""
	I1206 10:32:12.430927  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:12.931326  522370 type.go:168] "Request Body" body=""
	I1206 10:32:12.931399  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.931728  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:13.431494  522370 type.go:168] "Request Body" body=""
	I1206 10:32:13.431575  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.431906  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:13.931683  522370 type.go:168] "Request Body" body=""
	I1206 10:32:13.931761  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.932130  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:13.932175  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:14.430772  522370 type.go:168] "Request Body" body=""
	I1206 10:32:14.430846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.431201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:14.930793  522370 type.go:168] "Request Body" body=""
	I1206 10:32:14.930874  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:15.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:32:15.430896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.431300  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:15.930804  522370 type.go:168] "Request Body" body=""
	I1206 10:32:15.930877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.931264  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:16.431005  522370 type.go:168] "Request Body" body=""
	I1206 10:32:16.431079  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.431470  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:16.431521  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:16.931228  522370 type.go:168] "Request Body" body=""
	I1206 10:32:16.931295  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.931553  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:17.431420  522370 type.go:168] "Request Body" body=""
	I1206 10:32:17.431494  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.431814  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:17.930648  522370 type.go:168] "Request Body" body=""
	I1206 10:32:17.930727  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.931063  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.430755  522370 type.go:168] "Request Body" body=""
	I1206 10:32:18.430879  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.431264  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.930777  522370 type.go:168] "Request Body" body=""
	I1206 10:32:18.930861  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.931245  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:18.931300  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:19.430824  522370 type.go:168] "Request Body" body=""
	I1206 10:32:19.430907  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.431295  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:19.930976  522370 type.go:168] "Request Body" body=""
	I1206 10:32:19.931045  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:20.431030  522370 type.go:168] "Request Body" body=""
	I1206 10:32:20.431116  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.431515  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:20.931305  522370 type.go:168] "Request Body" body=""
	I1206 10:32:20.931378  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.931713  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:20.931766  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:21.431535  522370 type.go:168] "Request Body" body=""
	I1206 10:32:21.431650  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.431909  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:21.931668  522370 type.go:168] "Request Body" body=""
	I1206 10:32:21.931751  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.932103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:22.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:32:22.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.431161  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:22.931142  522370 type.go:168] "Request Body" body=""
	I1206 10:32:22.931211  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.931472  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:23.431308  522370 type.go:168] "Request Body" body=""
	I1206 10:32:23.431380  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.431717  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:23.431770  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:23.931606  522370 type.go:168] "Request Body" body=""
	I1206 10:32:23.931684  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.932028  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:24.430722  522370 type.go:168] "Request Body" body=""
	I1206 10:32:24.430852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.431235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:24.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:32:24.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.931200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:32:25.430855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.930892  522370 type.go:168] "Request Body" body=""
	I1206 10:32:25.930959  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.931238  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:25.931278  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:26.430787  522370 type.go:168] "Request Body" body=""
	I1206 10:32:26.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.431231  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:26.930954  522370 type.go:168] "Request Body" body=""
	I1206 10:32:26.931033  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.931398  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.431111  522370 type.go:168] "Request Body" body=""
	I1206 10:32:27.431201  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.431504  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.931658  522370 type.go:168] "Request Body" body=""
	I1206 10:32:27.931732  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.932069  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:27.932132  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:28.430761  522370 type.go:168] "Request Body" body=""
	I1206 10:32:28.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.431176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:32:28.930783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.931095  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.430760  522370 type.go:168] "Request Body" body=""
	I1206 10:32:29.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.930785  522370 type.go:168] "Request Body" body=""
	I1206 10:32:29.930863  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.430912  522370 type.go:168] "Request Body" body=""
	I1206 10:32:30.430988  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.431300  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:30.431356  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:30.930756  522370 type.go:168] "Request Body" body=""
	I1206 10:32:30.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.931179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:32:31.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.930917  522370 type.go:168] "Request Body" body=""
	I1206 10:32:31.930986  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.931271  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.430794  522370 type.go:168] "Request Body" body=""
	I1206 10:32:32.430881  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.431249  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.931263  522370 type.go:168] "Request Body" body=""
	I1206 10:32:32.931386  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.931723  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:32.931782  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:33.431496  522370 type.go:168] "Request Body" body=""
	I1206 10:32:33.431581  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:33.931657  522370 type.go:168] "Request Body" body=""
	I1206 10:32:33.931736  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.932091  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:32:34.430790  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.431152  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.930699  522370 type.go:168] "Request Body" body=""
	I1206 10:32:34.930768  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.931073  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:32:35.430837  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.431193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:35.431247  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:35.930738  522370 type.go:168] "Request Body" body=""
	I1206 10:32:35.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.931165  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.430720  522370 type.go:168] "Request Body" body=""
	I1206 10:32:36.430791  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:32:36.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.931108  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.430776  522370 type.go:168] "Request Body" body=""
	I1206 10:32:37.430857  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.431190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.931387  522370 type.go:168] "Request Body" body=""
	I1206 10:32:37.931455  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.931795  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:37.931855  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:38.431623  522370 type.go:168] "Request Body" body=""
	I1206 10:32:38.431705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.432052  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:38.930772  522370 type.go:168] "Request Body" body=""
	I1206 10:32:38.930850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:32:39.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.431143  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:32:39.930822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.931187  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:40.430754  522370 type.go:168] "Request Body" body=""
	I1206 10:32:40.430831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.431262  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:40.431317  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:40.930972  522370 type.go:168] "Request Body" body=""
	I1206 10:32:40.931048  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.931346  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.430760  522370 type.go:168] "Request Body" body=""
	I1206 10:32:41.430833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.431192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.930757  522370 type.go:168] "Request Body" body=""
	I1206 10:32:41.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:32:42.430816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.431140  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.931171  522370 type.go:168] "Request Body" body=""
	I1206 10:32:42.931246  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.931610  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:42.931666  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:43.431315  522370 type.go:168] "Request Body" body=""
	I1206 10:32:43.431391  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.431734  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.931465  522370 type.go:168] "Request Body" body=""
	I1206 10:32:43.931536  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.931803  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.431545  522370 type.go:168] "Request Body" body=""
	I1206 10:32:44.431622  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.431960  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.931639  522370 type.go:168] "Request Body" body=""
	I1206 10:32:44.931734  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.932055  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:44.932114  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:45.430772  522370 type.go:168] "Request Body" body=""
	I1206 10:32:45.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.431116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:32:45.930868  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.931291  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:32:46.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.930863  522370 type.go:168] "Request Body" body=""
	I1206 10:32:46.930930  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:32:47.430887  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.431295  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:47.431359  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.931474  522370 type.go:168] "Request Body" body=""
	I1206 10:32:47.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.931907  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.431677  522370 type.go:168] "Request Body" body=""
	I1206 10:32:48.431748  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.432085  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.930788  522370 type.go:168] "Request Body" body=""
	I1206 10:32:48.930871  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.931291  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.430869  522370 type.go:168] "Request Body" body=""
	I1206 10:32:49.430950  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.431291  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.930725  522370 type.go:168] "Request Body" body=""
	I1206 10:32:49.930794  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.931082  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.931153  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:50.430853  522370 type.go:168] "Request Body" body=""
	I1206 10:32:50.430949  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.431284  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.930721  522370 type.go:168] "Request Body" body=""
	I1206 10:32:50.930802  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.931146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.430715  522370 type.go:168] "Request Body" body=""
	I1206 10:32:51.430795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.431104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.930818  522370 type.go:168] "Request Body" body=""
	I1206 10:32:51.930895  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.931285  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:51.931368  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:52.431078  522370 type.go:168] "Request Body" body=""
	I1206 10:32:52.431180  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.431482  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.931409  522370 type.go:168] "Request Body" body=""
	I1206 10:32:52.931482  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.931752  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.431547  522370 type.go:168] "Request Body" body=""
	I1206 10:32:53.431624  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.431945  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.930683  522370 type.go:168] "Request Body" body=""
	I1206 10:32:53.930759  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.931085  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.430729  522370 type.go:168] "Request Body" body=""
	I1206 10:32:54.430803  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.431094  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:54.431169  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:54.930721  522370 type.go:168] "Request Body" body=""
	I1206 10:32:54.930796  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.931156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.430745  522370 type.go:168] "Request Body" body=""
	I1206 10:32:55.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.431164  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.930849  522370 type.go:168] "Request Body" body=""
	I1206 10:32:55.930915  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.931210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:56.430891  522370 type.go:168] "Request Body" body=""
	I1206 10:32:56.430970  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.431338  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:56.431397  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:56.930910  522370 type.go:168] "Request Body" body=""
	I1206 10:32:56.930994  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.931313  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.430984  522370 type.go:168] "Request Body" body=""
	I1206 10:32:57.431057  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.431352  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.931626  522370 type.go:168] "Request Body" body=""
	I1206 10:32:57.931699  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.932050  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.430670  522370 type.go:168] "Request Body" body=""
	I1206 10:32:58.430747  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.431102  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.930730  522370 type.go:168] "Request Body" body=""
	I1206 10:32:58.930798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.931062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:58.931101  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:59.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:32:59.430871  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.431207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.930920  522370 type.go:168] "Request Body" body=""
	I1206 10:32:59.930996  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.931373  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.431073  522370 type.go:168] "Request Body" body=""
	I1206 10:33:00.431174  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.431454  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.931159  522370 type.go:168] "Request Body" body=""
	I1206 10:33:00.931233  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.931593  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:00.931646  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 poll repeats every ~500ms from 10:33:01.431 through 10:34:02.431, each request carrying the identical Accept/User-Agent headers and each response coming back empty (status="" headers="" milliseconds=0); node_ready.go:55 logs the same "connection refused" warning roughly every two seconds, most recently:]
	W1206 10:34:00.431315  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:02.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:34:02.931109  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.931453  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:02.931514  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:03.430966  522370 type.go:168] "Request Body" body=""
	I1206 10:34:03.431062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.431375  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:03.930731  522370 type.go:168] "Request Body" body=""
	I1206 10:34:03.930814  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.430751  522370 type.go:168] "Request Body" body=""
	I1206 10:34:04.430825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.930717  522370 type.go:168] "Request Body" body=""
	I1206 10:34:04.930787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.931097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:05.430797  522370 type.go:168] "Request Body" body=""
	I1206 10:34:05.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.431234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:05.431295  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:05.930980  522370 type.go:168] "Request Body" body=""
	I1206 10:34:05.931058  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.931414  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:34:06.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.431089  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:34:06.930844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.931244  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.430820  522370 type.go:168] "Request Body" body=""
	I1206 10:34:07.430894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.431251  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.931445  522370 type.go:168] "Request Body" body=""
	I1206 10:34:07.931516  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.931771  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:07.931812  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:08.431524  522370 type.go:168] "Request Body" body=""
	I1206 10:34:08.431601  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.431921  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:08.930678  522370 type.go:168] "Request Body" body=""
	I1206 10:34:08.930767  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.430817  522370 type.go:168] "Request Body" body=""
	I1206 10:34:09.430892  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.930925  522370 type.go:168] "Request Body" body=""
	I1206 10:34:09.931018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.931371  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:10.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:34:10.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.431202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:10.431255  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:10.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:34:10.930831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.931090  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:34:11.430882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:34:11.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.430751  522370 type.go:168] "Request Body" body=""
	I1206 10:34:12.430822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.431076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.930963  522370 type.go:168] "Request Body" body=""
	I1206 10:34:12.931034  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.931391  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:12.931447  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:13.430984  522370 type.go:168] "Request Body" body=""
	I1206 10:34:13.431059  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.431405  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.930730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:13.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.931082  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.430699  522370 type.go:168] "Request Body" body=""
	I1206 10:34:14.430785  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.930773  522370 type.go:168] "Request Body" body=""
	I1206 10:34:14.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.931210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:15.430808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.431058  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:15.431101  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:34:15.930809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.931163  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.430877  522370 type.go:168] "Request Body" body=""
	I1206 10:34:16.430949  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.431309  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:34:16.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.931088  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.430798  522370 type.go:168] "Request Body" body=""
	I1206 10:34:17.430879  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.431288  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.931511  522370 type.go:168] "Request Body" body=""
	I1206 10:34:17.931612  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.931976  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.431590  522370 type.go:168] "Request Body" body=""
	I1206 10:34:18.431659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.432004  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:34:18.930808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:19.430863  522370 type.go:168] "Request Body" body=""
	I1206 10:34:19.430939  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.431293  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:19.431346  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:19.930992  522370 type.go:168] "Request Body" body=""
	I1206 10:34:19.931064  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.931410  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:34:20.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.931558  522370 type.go:168] "Request Body" body=""
	I1206 10:34:20.931639  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.430701  522370 type.go:168] "Request Body" body=""
	I1206 10:34:21.430786  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.431147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:34:21.930827  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.931172  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:21.931232  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:22.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:34:22.430999  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.431346  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:22.931270  522370 type.go:168] "Request Body" body=""
	I1206 10:34:22.931368  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.931817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.431585  522370 type.go:168] "Request Body" body=""
	I1206 10:34:23.431659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.431973  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:34:23.930759  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.931087  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:24.430797  522370 type.go:168] "Request Body" body=""
	I1206 10:34:24.430872  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.431117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:24.431176  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:24.930806  522370 type.go:168] "Request Body" body=""
	I1206 10:34:24.930882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.430788  522370 type.go:168] "Request Body" body=""
	I1206 10:34:25.430861  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.930868  522370 type.go:168] "Request Body" body=""
	I1206 10:34:25.930939  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.931218  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:26.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:34:26.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.431213  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:26.431274  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:26.930768  522370 type.go:168] "Request Body" body=""
	I1206 10:34:26.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.931192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.430884  522370 type.go:168] "Request Body" body=""
	I1206 10:34:27.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.431252  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.931325  522370 type.go:168] "Request Body" body=""
	I1206 10:34:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.931744  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.431435  522370 type.go:168] "Request Body" body=""
	I1206 10:34:28.431523  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.431850  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:28.431909  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:28.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:34:28.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.931970  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:34:29.430803  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.431141  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.930784  522370 type.go:168] "Request Body" body=""
	I1206 10:34:29.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.430844  522370 type.go:168] "Request Body" body=""
	I1206 10:34:30.430919  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.431210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.930768  522370 type.go:168] "Request Body" body=""
	I1206 10:34:30.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.931235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:30.931295  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:31.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:34:31.430887  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.431198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:31.930745  522370 type.go:168] "Request Body" body=""
	I1206 10:34:31.930813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.931077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:34:32.430840  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.431195  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.931075  522370 type.go:168] "Request Body" body=""
	I1206 10:34:32.931167  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.931468  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:32.931518  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:33.431100  522370 type.go:168] "Request Body" body=""
	I1206 10:34:33.431184  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.431485  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:33.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:33.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.931221  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:34:34.430877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.431210  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.930739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:34.930818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.931162  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.430773  522370 type.go:168] "Request Body" body=""
	I1206 10:34:35.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.431214  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:35.431268  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:35.930868  522370 type.go:168] "Request Body" body=""
	I1206 10:34:35.930944  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.430720  522370 type.go:168] "Request Body" body=""
	I1206 10:34:36.430791  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.431040  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.930739  522370 type.go:168] "Request Body" body=""
	I1206 10:34:36.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.931195  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.430910  522370 type.go:168] "Request Body" body=""
	I1206 10:34:37.430986  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.431301  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:37.431348  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:37.931302  522370 type.go:168] "Request Body" body=""
	I1206 10:34:37.931371  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.931629  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.431530  522370 type.go:168] "Request Body" body=""
	I1206 10:34:38.431619  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.431930  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.930656  522370 type.go:168] "Request Body" body=""
	I1206 10:34:38.930736  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.931104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.430791  522370 type.go:168] "Request Body" body=""
	I1206 10:34:39.430869  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.431157  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.930904  522370 type.go:168] "Request Body" body=""
	I1206 10:34:39.930984  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.931350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:39.931412  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:40.431091  522370 type.go:168] "Request Body" body=""
	I1206 10:34:40.431191  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.431534  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:40.931277  522370 type.go:168] "Request Body" body=""
	I1206 10:34:40.931349  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.931605  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.431406  522370 type.go:168] "Request Body" body=""
	I1206 10:34:41.431517  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.431838  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.931609  522370 type.go:168] "Request Body" body=""
	I1206 10:34:41.931696  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.932047  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:41.932102  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:42.430748  522370 type.go:168] "Request Body" body=""
	I1206 10:34:42.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.431103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.931215  522370 type.go:168] "Request Body" body=""
	I1206 10:34:42.931317  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.931648  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.431450  522370 type.go:168] "Request Body" body=""
	I1206 10:34:43.431526  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.431858  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.931579  522370 type.go:168] "Request Body" body=""
	I1206 10:34:43.931659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.931991  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.431656  522370 type.go:168] "Request Body" body=""
	I1206 10:34:44.431730  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.432129  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:44.432185  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:44.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:34:44.930810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.430889  522370 type.go:168] "Request Body" body=""
	I1206 10:34:45.430961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.930943  522370 type.go:168] "Request Body" body=""
	I1206 10:34:45.931026  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.931431  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:34:46.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.431156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.930832  522370 type.go:168] "Request Body" body=""
	I1206 10:34:46.930896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:46.931219  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:47.430865  522370 type.go:168] "Request Body" body=""
	I1206 10:34:47.430941  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.431318  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.931392  522370 type.go:168] "Request Body" body=""
	I1206 10:34:47.931469  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.931802  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.431602  522370 type.go:168] "Request Body" body=""
	I1206 10:34:48.431696  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.432026  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:48.930851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.931294  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:48.931353  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:49.431025  522370 type.go:168] "Request Body" body=""
	I1206 10:34:49.431108  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.431448  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.930724  522370 type.go:168] "Request Body" body=""
	I1206 10:34:49.930802  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.430791  522370 type.go:168] "Request Body" body=""
	I1206 10:34:50.430867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.431248  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.930784  522370 type.go:168] "Request Body" body=""
	I1206 10:34:50.930864  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.931205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:34:51.430811  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.431080  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:51.431150  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:51.930837  522370 type.go:168] "Request Body" body=""
	I1206 10:34:51.930930  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:34:52.430851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.431202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.931267  522370 type.go:168] "Request Body" body=""
	I1206 10:34:52.931348  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.931664  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.431501  522370 type.go:168] "Request Body" body=""
	I1206 10:34:53.431595  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.431957  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:53.432013  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:53.931658  522370 type.go:168] "Request Body" body=""
	I1206 10:34:53.931738  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.932077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.430749  522370 type.go:168] "Request Body" body=""
	I1206 10:34:54.430871  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.431247  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:34:54.930837  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.931206  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.430922  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.431013  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.431352  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.930722  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.931160  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:55.931217  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:56.430917  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.430995  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.431296  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.931062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.931423  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.431029  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.431303  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.931550  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.931631  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.932029  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:58.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.431155  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.930843  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.930914  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.430950  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.431266  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.930906  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.430976  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.431061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.431541  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:00.431605  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:00.931369  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.431561  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.930651  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.930724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.930990  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.430729  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.430828  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.931011  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.931089  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.931442  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:02.931498  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:03.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.430888  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.431297  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.930812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:05.431281  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:05.930825  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.930901  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.931256  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.431148  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.931217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.430991  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.431345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:07.431402  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:07.931619  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.931687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.931937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.430638  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.430708  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.431028  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.431338  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:09.931252  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.430779  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.430901  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.430718  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.431211  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.931636  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.431462  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.431538  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.431885  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.931641  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.431257  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:14.930975  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.931053  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.931466  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.431217  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.431580  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.931377  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.931454  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.931796  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.431484  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.431559  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.431888  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:16.431945  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:16.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.931713  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.931977  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.931466  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.931549  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.931886  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.431642  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.431964  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:18.432006  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:18.930687  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.930760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.931117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.430852  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.430938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.431325  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.930751  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.930852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.931255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:20.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:21.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.930732  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.931186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.430734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.430810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.931191  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.931524  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:22.931567  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:23.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.431424  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.431074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.430898  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.431343  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:25.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:25.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.931103  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.931404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.931215  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.430765  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.931322  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.931759  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:27.931820  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:28.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.431179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.930795  522370 node_ready.go:38] duration metric: took 6m0.000265171s for node "functional-123579" to be "Ready" ...
	I1206 10:35:28.934235  522370 out.go:203] 
	W1206 10:35:28.937230  522370 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:35:28.937255  522370 out.go:285] * 
	W1206 10:35:28.939411  522370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:35:28.942269  522370 out.go:203] 
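The six minutes of near-identical round_trippers entries above are minikube polling the node's Ready condition at roughly 500ms intervals; every probe fails with connection refused because nothing is listening on 192.168.49.2:8441, so the wait hits its 6m0s deadline (node_ready.go:38) and start exits with GUEST_START. As a minimal sketch of reproducing the same probe by hand (the certificate paths below are the usual minikube defaults and an assumption, not taken from this log):

	# Hit the same endpoint the loop above polls, once, from the host:
	curl --cacert ~/.minikube/ca.crt \
	     --cert ~/.minikube/profiles/functional-123579/client.crt \
	     --key ~/.minikube/profiles/functional-123579/client.key \
	     https://192.168.49.2:8441/api/v1/nodes/functional-123579

	# Or, given a working kubeconfig context, read just the Ready condition:
	kubectl --context functional-123579 get node functional-123579 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'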
	
	
	==> CRI-O <==
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948799439Z" level=info msg="Using the internal default seccomp profile"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948873136Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948927305Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.948983969Z" level=info msg="RDT not available in the host system"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.949060069Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.95001273Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.950102393Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.950167188Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.950967097Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.951048383Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.9513514Z" level=info msg="Updated default CNI network name to "
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.9523931Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.952804193Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.952869734Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988235454Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988272007Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988323395Z" level=info msg="Create NRI interface"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988426186Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988433989Z" level=info msg="runtime interface created"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988446223Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988452434Z" level=info msg="runtime interface starting up..."
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988458941Z" level=info msg="starting plugins..."
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988472553Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:29:25 functional-123579 crio[5369]: time="2025-12-06T10:29:25.988537683Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:29:25 functional-123579 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
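CRI-O itself comes up cleanly here (systemd reports the unit started and no errors follow), so the container runtime layer is not the culprit; the failure sits above it, in kubelet. A hedged way to double-check the runtime from inside the node container, using the socket path from the config dump above (crictl ships in the minikube node image; the exact invocation is an assumption):

	# Ask CRI-O for its version over its own socket, from inside the node:
	docker exec functional-123579 sudo crictl \
	  --runtime-endpoint unix:///var/run/crio/crio.sock version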
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:35:33.545732    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:33.546559    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:33.548305    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:33.548634    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:33.550068    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
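kubectl here fails for the same underlying reason as the polling loop earlier: no apiserver is listening on port 8441, which matches the empty container-status table above (zero control-plane containers). A quick check from the host for apiserver containers in any state (standard crictl flags; the docker-exec framing is an assumption):

	# List kube-apiserver containers, running or exited, inside the node:
	docker exec functional-123579 sudo crictl ps -a --name kube-apiserver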
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:35:33 up  3:18,  0 user,  load average: 0.12, 0.27, 0.82
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:35:31 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:31 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 06 10:35:31 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:31 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:31 functional-123579 kubelet[8636]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:31 functional-123579 kubelet[8636]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:31 functional-123579 kubelet[8636]: E1206 10:35:31.995656    8636 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:31 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:31 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:32 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 06 10:35:32 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:32 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:32 functional-123579 kubelet[8672]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:32 functional-123579 kubelet[8672]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:32 functional-123579 kubelet[8672]: E1206 10:35:32.743239    8672 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:32 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:32 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:33 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 06 10:35:33 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:33 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:33 functional-123579 kubelet[8744]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:33 functional-123579 kubelet[8744]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:33 functional-123579 kubelet[8744]: E1206 10:35:33.494511    8744 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:33 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:33 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
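The kubelet section above pins down the root cause: this v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a tight loop (1,143 restarts by 10:35:33), no static pods ever start, and the apiserver never listens on 8441, which accounts for every connection-refused line in this report. The host kernel shown in the kernel section (5.15.0-1084-aws, an Ubuntu 20.04 build) is consistent with a legacy cgroup v1 setup. A one-liner to confirm which cgroup version a host runs:

	# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" means legacy v1:
	stat -fc %T /sys/fs/cgroup/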
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (366.289362ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 kubectl -- --context functional-123579 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 kubectl -- --context functional-123579 get pods: exit status 1 (129.693197ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-123579 kubectl -- --context functional-123579 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
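
The NetworkSettings.Ports block above shows each guest port (including the apiserver's 8441/tcp) published on 127.0.0.1. Using the same Go-template shape the harness itself applies to 22/tcp later in these logs, the host port backing the apiserver can be read directly; against the container state captured above this prints 33186:

$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-123579
33186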
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (327.370199ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
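
minikube status reports per-component state and encodes it in its exit code, which is why the harness treats exit status 2 as possibly benign: the host container is Running even though Kubernetes is not. A diagnostic aside (not a harness step): structured output shows all component fields rather than just {{.Host}}:

$ out/minikube-linux-arm64 status -p functional-123579 -o json
# expected shape here, given the state above: {"Name":"functional-123579","Host":"Running",...,"APIServer":"Stopped",...}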
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 logs -n 25: (1.144441773s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-137526 image ls --format short --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image   │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete  │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start   │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ start   │ -p functional-123579 --alsologtostderr -v=8                                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:latest                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add minikube-local-cache-test:functional-123579                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache delete minikube-local-cache-test:functional-123579                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl images                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ cache   │ functional-123579 cache reload                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ kubectl │ functional-123579 kubectl -- --context functional-123579 get pods                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:29:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:29:22.870980  522370 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:22.871170  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871181  522370 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:22.871187  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871464  522370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:29:22.871865  522370 out.go:368] Setting JSON to false
	I1206 10:29:22.872761  522370 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11514,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:29:22.872829  522370 start.go:143] virtualization:  
	I1206 10:29:22.876360  522370 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:29:22.880135  522370 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:29:22.880243  522370 notify.go:221] Checking for updates...
	I1206 10:29:22.885979  522370 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:29:22.888900  522370 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:22.891673  522370 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:29:22.894419  522370 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:29:22.897199  522370 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:29:22.900505  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:22.900663  522370 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:29:22.930035  522370 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:29:22.930154  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:22.994169  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:22.985097483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:22.994270  522370 docker.go:319] overlay module found
	I1206 10:29:22.997336  522370 out.go:179] * Using the docker driver based on existing profile
	I1206 10:29:23.000134  522370 start.go:309] selected driver: docker
	I1206 10:29:23.000177  522370 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.000290  522370 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:29:23.000407  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:23.064912  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:23.055716934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:23.065339  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:23.065406  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:23.065455  522370 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.068684  522370 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:29:23.071544  522370 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:29:23.074549  522370 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:29:23.077588  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:23.077638  522370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:29:23.077648  522370 cache.go:65] Caching tarball of preloaded images
	I1206 10:29:23.077715  522370 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:29:23.077742  522370 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:29:23.077753  522370 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:29:23.077861  522370 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:29:23.100973  522370 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:29:23.100996  522370 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:29:23.101011  522370 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:29:23.101047  522370 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:29:23.101106  522370 start.go:364] duration metric: took 36.569µs to acquireMachinesLock for "functional-123579"
	I1206 10:29:23.101131  522370 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:29:23.101140  522370 fix.go:54] fixHost starting: 
	I1206 10:29:23.101403  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:23.120661  522370 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:29:23.120697  522370 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:29:23.124123  522370 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:29:23.124169  522370 machine.go:94] provisionDockerMachine start ...
	I1206 10:29:23.124278  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.148209  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.148655  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.148670  522370 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:29:23.311217  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.311246  522370 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:29:23.311337  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.330615  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.330948  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.330967  522370 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:29:23.492326  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.492442  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.511425  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.511745  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.511767  522370 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:29:23.663802  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:29:23.663828  522370 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:29:23.663852  522370 ubuntu.go:190] setting up certificates
	I1206 10:29:23.663862  522370 provision.go:84] configureAuth start
	I1206 10:29:23.663938  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:23.683626  522370 provision.go:143] copyHostCerts
	I1206 10:29:23.683677  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683720  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:29:23.683732  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683811  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:29:23.683905  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683927  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:29:23.683935  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683965  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:29:23.684012  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684032  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:29:23.684040  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684065  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:29:23.684117  522370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:29:23.851072  522370 provision.go:177] copyRemoteCerts
	I1206 10:29:23.851167  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:29:23.851208  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.869258  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:23.976487  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:29:23.976551  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:29:23.994935  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:29:23.995001  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:29:24.028988  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:29:24.029065  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:29:24.047435  522370 provision.go:87] duration metric: took 383.548866ms to configureAuth
	I1206 10:29:24.047460  522370 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:29:24.047651  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:24.047753  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.065906  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:24.066279  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:24.066304  522370 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:29:24.394899  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:29:24.394922  522370 machine.go:97] duration metric: took 1.270744832s to provisionDockerMachine
	I1206 10:29:24.394933  522370 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:29:24.394946  522370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:29:24.395040  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:29:24.395089  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.413037  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.518950  522370 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:29:24.522167  522370 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:29:24.522190  522370 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:29:24.522196  522370 command_runner.go:130] > VERSION_ID="12"
	I1206 10:29:24.522201  522370 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:29:24.522206  522370 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:29:24.522219  522370 command_runner.go:130] > ID=debian
	I1206 10:29:24.522224  522370 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:29:24.522228  522370 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:29:24.522234  522370 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:29:24.522273  522370 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:29:24.522296  522370 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:29:24.522307  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:29:24.522366  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:29:24.522448  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:29:24.522465  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 10:29:24.522539  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:29:24.522547  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> /etc/test/nested/copy/488068/hosts
	I1206 10:29:24.522590  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:29:24.529941  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:24.547406  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:29:24.564885  522370 start.go:296] duration metric: took 169.937214ms for postStartSetup
	I1206 10:29:24.565009  522370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:29:24.565071  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.582051  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.684564  522370 command_runner.go:130] > 18%
	I1206 10:29:24.685308  522370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:29:24.690194  522370 command_runner.go:130] > 161G
	I1206 10:29:24.690863  522370 fix.go:56] duration metric: took 1.589719046s for fixHost
	I1206 10:29:24.690882  522370 start.go:83] releasing machines lock for "functional-123579", held for 1.589762361s
	I1206 10:29:24.690959  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:24.710139  522370 ssh_runner.go:195] Run: cat /version.json
	I1206 10:29:24.710198  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.710437  522370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:29:24.710491  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.744752  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.750995  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.850618  522370 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:29:24.850833  522370 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:24.941044  522370 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:29:24.943691  522370 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:29:24.943731  522370 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:29:24.943796  522370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:29:24.982406  522370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:29:24.986710  522370 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:29:24.986856  522370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:29:24.986921  522370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:29:24.995206  522370 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:29:24.995230  522370 start.go:496] detecting cgroup driver to use...
	I1206 10:29:24.995260  522370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:29:24.995314  522370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:29:25.015488  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:29:25.029388  522370 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:29:25.029474  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:29:25.044588  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:29:25.057886  522370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:29:25.175907  522370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:29:25.297406  522370 docker.go:234] disabling docker service ...
	I1206 10:29:25.297502  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:29:25.313940  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:29:25.326948  522370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:29:25.448237  522370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:29:25.592886  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:29:25.605716  522370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:29:25.618765  522370 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:29:25.620045  522370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:29:25.620120  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.628683  522370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:29:25.628808  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.637855  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.646676  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.656251  522370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:29:25.664395  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.673385  522370 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.681859  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.691317  522370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:29:25.697883  522370 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:29:25.698954  522370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:29:25.706470  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:25.835287  522370 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:29:25.994073  522370 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:29:25.994183  522370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:29:25.998083  522370 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:29:25.998204  522370 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:29:25.998238  522370 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:29:25.998335  522370 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:25.998358  522370 command_runner.go:130] > Access: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998390  522370 command_runner.go:130] > Modify: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998420  522370 command_runner.go:130] > Change: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998437  522370 command_runner.go:130] >  Birth: -
	I1206 10:29:25.998473  522370 start.go:564] Will wait 60s for crictl version
	I1206 10:29:25.998553  522370 ssh_runner.go:195] Run: which crictl
	I1206 10:29:26.004847  522370 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:29:26.004981  522370 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:29:26.037391  522370 command_runner.go:130] > Version:  0.1.0
	I1206 10:29:26.037414  522370 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:29:26.037421  522370 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:29:26.037427  522370 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:29:26.037438  522370 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:29:26.037548  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.065733  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.065769  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.065793  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.065805  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.065811  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.065822  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.065827  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.065832  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.065840  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.065845  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.065852  522370 command_runner.go:130] >      static
	I1206 10:29:26.065886  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.065897  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.065918  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.065928  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.065932  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.065941  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.065946  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.065954  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.065958  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.068082  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.095375  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.095453  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.095474  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.095491  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.095522  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.095561  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.095582  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.095622  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.095651  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.095669  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.095698  522370 command_runner.go:130] >      static
	I1206 10:29:26.095717  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.095735  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.095756  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.095787  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.095810  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.095867  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.095888  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.095910  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.095930  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.103062  522370 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:29:26.105990  522370 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}"
	I1206 10:29:26.122102  522370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:29:26.125939  522370 command_runner.go:130] > 192.168.49.1	host.minikube.internal
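[editor's note] The two lines above show minikube grepping /etc/hosts for the host.minikube.internal entry and finding it already present. A minimal local sketch of that check-then-append pattern (ensureHostsEntry is a hypothetical helper, not minikube code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if bytes.Contains(data, []byte(name)) {
    		return nil // entry already present, as in the log above
    	}
    	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, name)
    	return err
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }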
	I1206 10:29:26.126304  522370 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:29:26.126416  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:26.126475  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.161627  522370 command_runner.go:130] > {
	I1206 10:29:26.161646  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.161650  522370 command_runner.go:130] >     {
	I1206 10:29:26.161662  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.161666  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161672  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.161676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161681  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161689  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.161697  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.161702  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161707  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.161711  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161719  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161729  522370 command_runner.go:130] >     },
	I1206 10:29:26.161732  522370 command_runner.go:130] >     {
	I1206 10:29:26.161739  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.161743  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161748  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.161751  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161757  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161765  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.161774  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.161777  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161781  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.161785  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161792  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161795  522370 command_runner.go:130] >     },
	I1206 10:29:26.161799  522370 command_runner.go:130] >     {
	I1206 10:29:26.161805  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.161810  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161815  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.161818  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161822  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161830  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.161838  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.161843  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161847  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.161851  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.161856  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161859  522370 command_runner.go:130] >     },
	I1206 10:29:26.161863  522370 command_runner.go:130] >     {
	I1206 10:29:26.161869  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.161873  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161878  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.161883  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161887  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161898  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.161905  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.161908  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161912  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.161916  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.161920  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.161923  522370 command_runner.go:130] >       },
	I1206 10:29:26.161935  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161939  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161942  522370 command_runner.go:130] >     },
	I1206 10:29:26.161946  522370 command_runner.go:130] >     {
	I1206 10:29:26.161953  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.161956  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161963  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.161966  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161970  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161978  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.161986  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.161990  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161994  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.161997  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162001  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162004  522370 command_runner.go:130] >       },
	I1206 10:29:26.162008  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162011  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162014  522370 command_runner.go:130] >     },
	I1206 10:29:26.162018  522370 command_runner.go:130] >     {
	I1206 10:29:26.162024  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.162028  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162033  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.162037  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162041  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162050  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.162067  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.162071  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162075  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.162081  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162091  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162094  522370 command_runner.go:130] >       },
	I1206 10:29:26.162098  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162102  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162105  522370 command_runner.go:130] >     },
	I1206 10:29:26.162115  522370 command_runner.go:130] >     {
	I1206 10:29:26.162123  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.162128  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162134  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.162137  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162143  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162154  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.162163  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.162166  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162170  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.162173  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162178  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162181  522370 command_runner.go:130] >     },
	I1206 10:29:26.162184  522370 command_runner.go:130] >     {
	I1206 10:29:26.162191  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.162194  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162200  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.162203  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162207  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162215  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.162232  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.162235  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162239  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.162243  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162250  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162253  522370 command_runner.go:130] >       },
	I1206 10:29:26.162257  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162260  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162263  522370 command_runner.go:130] >     },
	I1206 10:29:26.162267  522370 command_runner.go:130] >     {
	I1206 10:29:26.162273  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.162277  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162281  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.162284  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162288  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162296  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.162304  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.162307  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162311  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.162315  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162318  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.162321  522370 command_runner.go:130] >       },
	I1206 10:29:26.162325  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162329  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.162333  522370 command_runner.go:130] >     }
	I1206 10:29:26.162336  522370 command_runner.go:130] >   ]
	I1206 10:29:26.162339  522370 command_runner.go:130] > }
	I1206 10:29:26.164653  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.164677  522370 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:29:26.164733  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.190066  522370 command_runner.go:130] > {
	I1206 10:29:26.190096  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.190102  522370 command_runner.go:130] >     {
	I1206 10:29:26.190111  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.190116  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190122  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.190126  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190130  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190139  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.190147  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.190155  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190160  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.190164  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190168  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190171  522370 command_runner.go:130] >     },
	I1206 10:29:26.190174  522370 command_runner.go:130] >     {
	I1206 10:29:26.190181  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.190184  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190189  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.190193  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190197  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190205  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.190213  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.190216  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190220  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.190224  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190229  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190232  522370 command_runner.go:130] >     },
	I1206 10:29:26.190235  522370 command_runner.go:130] >     {
	I1206 10:29:26.190241  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.190245  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190250  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.190254  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190257  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190265  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.190273  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.190277  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190281  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.190285  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.190289  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190292  522370 command_runner.go:130] >     },
	I1206 10:29:26.190295  522370 command_runner.go:130] >     {
	I1206 10:29:26.190301  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.190308  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190313  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.190317  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190322  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190329  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.190336  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.190339  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190343  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.190346  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190350  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190353  522370 command_runner.go:130] >       },
	I1206 10:29:26.190364  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190369  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190372  522370 command_runner.go:130] >     },
	I1206 10:29:26.190374  522370 command_runner.go:130] >     {
	I1206 10:29:26.190381  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.190384  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190389  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.190392  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190396  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190403  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.190412  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.190415  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190419  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.190422  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190425  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190428  522370 command_runner.go:130] >       },
	I1206 10:29:26.190432  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190436  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190439  522370 command_runner.go:130] >     },
	I1206 10:29:26.190441  522370 command_runner.go:130] >     {
	I1206 10:29:26.190448  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.190452  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190460  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.190464  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190467  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190476  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.190484  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.190486  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190490  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.190493  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190497  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190500  522370 command_runner.go:130] >       },
	I1206 10:29:26.190504  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190507  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190514  522370 command_runner.go:130] >     },
	I1206 10:29:26.190517  522370 command_runner.go:130] >     {
	I1206 10:29:26.190524  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.190528  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190533  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.190536  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190540  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190547  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.190554  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.190557  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190561  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.190565  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190569  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190572  522370 command_runner.go:130] >     },
	I1206 10:29:26.190574  522370 command_runner.go:130] >     {
	I1206 10:29:26.190581  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.190584  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190590  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.190593  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190597  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190604  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.190628  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.190632  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190636  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.190639  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190643  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190646  522370 command_runner.go:130] >       },
	I1206 10:29:26.190650  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190653  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190656  522370 command_runner.go:130] >     },
	I1206 10:29:26.190659  522370 command_runner.go:130] >     {
	I1206 10:29:26.190665  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.190669  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190673  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.190676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190680  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190687  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.190694  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.190697  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190701  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.190705  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190709  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.190712  522370 command_runner.go:130] >       },
	I1206 10:29:26.190716  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190719  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.190722  522370 command_runner.go:130] >     }
	I1206 10:29:26.190724  522370 command_runner.go:130] >   ]
	I1206 10:29:26.190728  522370 command_runner.go:130] > }
	I1206 10:29:26.192099  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.192121  522370 cache_images.go:86] Images are preloaded, skipping loading
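[editor's note] The preload decision above (crio.go:514, cache_images.go:86) is driven by the JSON that `sudo crictl images --output json` printed earlier. A minimal sketch, assuming only the CRI `images`/`repoTags` shape visible in that output, of how such a check can conclude that loading can be skipped (imagesPreloaded is hypothetical, not minikube's function):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type criImageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func imagesPreloaded(required []string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list criImageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range required {
    		if !have[want] {
    			return false, nil // at least one image would still need loading
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := imagesPreloaded([]string{"registry.k8s.io/pause:3.10.1"})
    	fmt.Println(ok, err)
    }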
	I1206 10:29:26.192130  522370 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:29:26.192245  522370 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
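[editor's note] The kubelet unit above is generated from the cluster config that follows it. A minimal sketch, with illustrative field names, of rendering such a systemd drop-in via text/template; the unit minikube actually writes carries more flags than this template does:

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletUnit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Runtime":           "crio",
    		"KubernetesVersion": "v1.35.0-beta.0",
    		"NodeName":          "functional-123579",
    		"NodeIP":            "192.168.49.2",
    	})
    }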
	I1206 10:29:26.192338  522370 ssh_runner.go:195] Run: crio config
	I1206 10:29:26.220366  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.219989922Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:29:26.220411  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220176363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:29:26.220654  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22050187Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:29:26.220871  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220715248Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:29:26.221165  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22098899Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:26.221621  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.221432459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:29:26.238478  522370 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 10:29:26.263608  522370 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:29:26.263638  522370 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:29:26.263647  522370 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:29:26.263651  522370 command_runner.go:130] > #
	I1206 10:29:26.263687  522370 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:29:26.263707  522370 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:29:26.263714  522370 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:29:26.263721  522370 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:29:26.263726  522370 command_runner.go:130] > # reload'.
	I1206 10:29:26.263732  522370 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:29:26.263756  522370 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:29:26.263778  522370 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:29:26.263789  522370 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:29:26.263793  522370 command_runner.go:130] > [crio]
	I1206 10:29:26.263802  522370 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:29:26.263811  522370 command_runner.go:130] > # containers images, in this directory.
	I1206 10:29:26.263826  522370 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:29:26.263848  522370 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:29:26.263868  522370 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:29:26.263877  522370 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:29:26.263885  522370 command_runner.go:130] > # imagestore = ""
	I1206 10:29:26.263894  522370 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:29:26.263901  522370 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:29:26.263908  522370 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:29:26.263914  522370 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:29:26.263920  522370 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:29:26.263936  522370 command_runner.go:130] > # storage_option = [
	I1206 10:29:26.263952  522370 command_runner.go:130] > # ]
	I1206 10:29:26.263965  522370 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:29:26.263972  522370 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:29:26.263985  522370 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:29:26.263995  522370 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:29:26.264002  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:29:26.264006  522370 command_runner.go:130] > # always happen on a node reboot
	I1206 10:29:26.264013  522370 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:29:26.264036  522370 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:29:26.264050  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:29:26.264055  522370 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:29:26.264060  522370 command_runner.go:130] > # version_file_persist = ""
	I1206 10:29:26.264078  522370 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:29:26.264092  522370 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:29:26.264096  522370 command_runner.go:130] > # internal_wipe = true
	I1206 10:29:26.264105  522370 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:29:26.264113  522370 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:29:26.264117  522370 command_runner.go:130] > # internal_repair = true
	I1206 10:29:26.264124  522370 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:29:26.264131  522370 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:29:26.264150  522370 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:29:26.264171  522370 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:29:26.264181  522370 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:29:26.264188  522370 command_runner.go:130] > [crio.api]
	I1206 10:29:26.264194  522370 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:29:26.264202  522370 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:29:26.264208  522370 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:29:26.264214  522370 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:29:26.264221  522370 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:29:26.264226  522370 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:29:26.264241  522370 command_runner.go:130] > # stream_port = "0"
	I1206 10:29:26.264256  522370 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:29:26.264261  522370 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:29:26.264279  522370 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:29:26.264295  522370 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:29:26.264302  522370 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:29:26.264317  522370 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264326  522370 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:29:26.264332  522370 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:29:26.264338  522370 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264355  522370 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:29:26.264373  522370 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:29:26.264389  522370 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:29:26.264395  522370 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:29:26.264399  522370 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:29:26.264435  522370 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264448  522370 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:29:26.264456  522370 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264460  522370 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:29:26.264467  522370 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:29:26.264476  522370 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:29:26.264479  522370 command_runner.go:130] > [crio.runtime]
	I1206 10:29:26.264489  522370 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:29:26.264495  522370 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:29:26.264506  522370 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:29:26.264513  522370 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:29:26.264524  522370 command_runner.go:130] > # default_ulimits = [
	I1206 10:29:26.264527  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264534  522370 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:29:26.264543  522370 command_runner.go:130] > # no_pivot = false
	I1206 10:29:26.264549  522370 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:29:26.264555  522370 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:29:26.264561  522370 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:29:26.264569  522370 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:29:26.264576  522370 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:29:26.264584  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264591  522370 command_runner.go:130] > # conmon = ""
	I1206 10:29:26.264595  522370 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:29:26.264602  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:29:26.264612  522370 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:29:26.264623  522370 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:29:26.264629  522370 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:29:26.264643  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264647  522370 command_runner.go:130] > # conmon_env = [
	I1206 10:29:26.264650  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264655  522370 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:29:26.264660  522370 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:29:26.264668  522370 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:29:26.264674  522370 command_runner.go:130] > # default_env = [
	I1206 10:29:26.264677  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264683  522370 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:29:26.264699  522370 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:29:26.264703  522370 command_runner.go:130] > # selinux = false
	I1206 10:29:26.264710  522370 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:29:26.264720  522370 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:29:26.264729  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264734  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.264740  522370 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:29:26.264745  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264751  522370 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:29:26.264759  522370 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:29:26.264767  522370 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:29:26.264774  522370 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:29:26.264789  522370 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:29:26.264794  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264799  522370 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:29:26.264807  522370 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:29:26.264817  522370 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:29:26.264821  522370 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:29:26.264828  522370 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:29:26.264834  522370 command_runner.go:130] > # blockio parameters.
	I1206 10:29:26.264838  522370 command_runner.go:130] > # blockio_reload = false
	I1206 10:29:26.264849  522370 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:29:26.264856  522370 command_runner.go:130] > # irqbalance daemon.
	I1206 10:29:26.264862  522370 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:29:26.264868  522370 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:29:26.264877  522370 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:29:26.264889  522370 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:29:26.264897  522370 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:29:26.264904  522370 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:29:26.264910  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264917  522370 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:29:26.264922  522370 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:29:26.264926  522370 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:29:26.264932  522370 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:29:26.264936  522370 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:29:26.264946  522370 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:29:26.264954  522370 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:29:26.264958  522370 command_runner.go:130] > # will be added.
	I1206 10:29:26.264966  522370 command_runner.go:130] > # default_capabilities = [
	I1206 10:29:26.264970  522370 command_runner.go:130] > # 	"CHOWN",
	I1206 10:29:26.264974  522370 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:29:26.264986  522370 command_runner.go:130] > # 	"FSETID",
	I1206 10:29:26.264990  522370 command_runner.go:130] > # 	"FOWNER",
	I1206 10:29:26.264993  522370 command_runner.go:130] > # 	"SETGID",
	I1206 10:29:26.264996  522370 command_runner.go:130] > # 	"SETUID",
	I1206 10:29:26.265019  522370 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:29:26.265029  522370 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:29:26.265035  522370 command_runner.go:130] > # 	"KILL",
	I1206 10:29:26.265038  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265046  522370 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:29:26.265056  522370 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:29:26.265061  522370 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:29:26.265069  522370 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:29:26.265075  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265088  522370 command_runner.go:130] > default_sysctls = [
	I1206 10:29:26.265093  522370 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:29:26.265096  522370 command_runner.go:130] > ]
	I1206 10:29:26.265101  522370 command_runner.go:130] > # List of devices on the host that a
	I1206 10:29:26.265110  522370 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:29:26.265114  522370 command_runner.go:130] > # allowed_devices = [
	I1206 10:29:26.265118  522370 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:29:26.265123  522370 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:29:26.265127  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265134  522370 command_runner.go:130] > # List of additional devices. specified as
	I1206 10:29:26.265142  522370 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:29:26.265150  522370 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:29:26.265156  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265160  522370 command_runner.go:130] > # additional_devices = [
	I1206 10:29:26.265164  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265169  522370 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:29:26.265179  522370 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:29:26.265184  522370 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:29:26.265188  522370 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:29:26.265194  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265200  522370 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:29:26.265206  522370 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:29:26.265213  522370 command_runner.go:130] > # Defaults to false.
	I1206 10:29:26.265218  522370 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:29:26.265225  522370 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:29:26.265233  522370 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:29:26.265237  522370 command_runner.go:130] > # hooks_dir = [
	I1206 10:29:26.265245  522370 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:29:26.265248  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265264  522370 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:29:26.265271  522370 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:29:26.265277  522370 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:29:26.265282  522370 command_runner.go:130] > #
	I1206 10:29:26.265293  522370 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:29:26.265302  522370 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:29:26.265309  522370 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:29:26.265312  522370 command_runner.go:130] > #
	I1206 10:29:26.265319  522370 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:29:26.265333  522370 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:29:26.265340  522370 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:29:26.265345  522370 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:29:26.265351  522370 command_runner.go:130] > #
	I1206 10:29:26.265355  522370 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:29:26.265360  522370 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:29:26.265367  522370 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:29:26.265371  522370 command_runner.go:130] > # pids_limit = -1
	I1206 10:29:26.265378  522370 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1206 10:29:26.265386  522370 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:29:26.265392  522370 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:29:26.265403  522370 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:29:26.265407  522370 command_runner.go:130] > # log_size_max = -1
	I1206 10:29:26.265416  522370 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:29:26.265423  522370 command_runner.go:130] > # log_to_journald = false
	I1206 10:29:26.265431  522370 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:29:26.265437  522370 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:29:26.265448  522370 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:29:26.265453  522370 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:29:26.265458  522370 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:29:26.265464  522370 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:29:26.265470  522370 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:29:26.265476  522370 command_runner.go:130] > # read_only = false
	I1206 10:29:26.265482  522370 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:29:26.265491  522370 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:29:26.265495  522370 command_runner.go:130] > # live configuration reload.
	I1206 10:29:26.265508  522370 command_runner.go:130] > # log_level = "info"
	I1206 10:29:26.265514  522370 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:29:26.265523  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.265529  522370 command_runner.go:130] > # log_filter = ""
	I1206 10:29:26.265536  522370 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265542  522370 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:29:26.265548  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265557  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265564  522370 command_runner.go:130] > # uid_mappings = ""
	I1206 10:29:26.265570  522370 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265578  522370 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:29:26.265586  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265597  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265602  522370 command_runner.go:130] > # gid_mappings = ""
	I1206 10:29:26.265611  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:29:26.265620  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265626  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265635  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265642  522370 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:29:26.265648  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:29:26.265656  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265663  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265680  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265684  522370 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:29:26.265691  522370 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:29:26.265701  522370 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:29:26.265707  522370 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1206 10:29:26.265713  522370 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:29:26.265719  522370 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:29:26.265727  522370 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:29:26.265733  522370 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I1206 10:29:26.265740  522370 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:29:26.265747  522370 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:29:26.265754  522370 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:29:26.265768  522370 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:29:26.265780  522370 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:29:26.265787  522370 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:29:26.265794  522370 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:29:26.265801  522370 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:29:26.265809  522370 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:29:26.265814  522370 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:29:26.265818  522370 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:29:26.265824  522370 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:29:26.265832  522370 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:29:26.265838  522370 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:29:26.265846  522370 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:29:26.265857  522370 command_runner.go:130] > # pinns_path = ""
	I1206 10:29:26.265863  522370 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:29:26.265869  522370 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:29:26.265874  522370 command_runner.go:130] > # enable_criu_support = true
	I1206 10:29:26.265881  522370 command_runner.go:130] > # Enable/disable the generation of the container,
	I1206 10:29:26.265887  522370 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:29:26.265894  522370 command_runner.go:130] > # enable_pod_events = false
	I1206 10:29:26.265901  522370 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:29:26.265906  522370 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:29:26.265910  522370 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:29:26.265915  522370 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:29:26.265925  522370 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1206 10:29:26.265945  522370 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:29:26.265951  522370 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:29:26.265960  522370 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:29:26.265970  522370 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:29:26.265974  522370 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:29:26.265977  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265984  522370 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:29:26.265993  522370 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:29:26.265999  522370 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:29:26.266004  522370 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:29:26.266011  522370 command_runner.go:130] > #
	I1206 10:29:26.266019  522370 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:29:26.266024  522370 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:29:26.266030  522370 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:29:26.266035  522370 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:29:26.266042  522370 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:29:26.266047  522370 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:29:26.266059  522370 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:29:26.266065  522370 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:29:26.266068  522370 command_runner.go:130] > # monitor_env = []
	I1206 10:29:26.266080  522370 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:29:26.266084  522370 command_runner.go:130] > # allowed_annotations = []
	I1206 10:29:26.266090  522370 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:29:26.266094  522370 command_runner.go:130] > # no_sync_log = false
	I1206 10:29:26.266098  522370 command_runner.go:130] > # default_annotations = {}
	I1206 10:29:26.266105  522370 command_runner.go:130] > # stream_websockets = false
	I1206 10:29:26.266112  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.266145  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.266155  522370 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:29:26.266162  522370 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:29:26.266168  522370 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:29:26.266182  522370 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:29:26.266186  522370 command_runner.go:130] > #   in $PATH.
	I1206 10:29:26.266192  522370 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:29:26.266199  522370 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:29:26.266206  522370 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:29:26.266212  522370 command_runner.go:130] > #   state.
	I1206 10:29:26.266218  522370 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:29:26.266224  522370 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:29:26.266232  522370 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:29:26.266239  522370 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:29:26.266247  522370 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:29:26.266254  522370 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:29:26.266265  522370 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:29:26.266275  522370 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:29:26.266283  522370 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:29:26.266287  522370 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:29:26.266294  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:29:26.266304  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:29:26.266315  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:29:26.266324  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:29:26.266332  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:29:26.266339  522370 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:29:26.266348  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:29:26.266356  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:29:26.266368  522370 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:29:26.266375  522370 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:29:26.266382  522370 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:29:26.266388  522370 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:29:26.266394  522370 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:29:26.266410  522370 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:29:26.266417  522370 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:29:26.266425  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:29:26.266435  522370 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:29:26.266440  522370 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:29:26.266447  522370 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:29:26.266455  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:29:26.266463  522370 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:29:26.266467  522370 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:29:26.266475  522370 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:29:26.266479  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:29:26.266489  522370 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1206 10:29:26.266501  522370 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:29:26.266510  522370 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:29:26.266520  522370 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:29:26.266531  522370 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:29:26.266542  522370 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:29:26.266552  522370 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:29:26.266559  522370 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:29:26.266571  522370 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:29:26.266585  522370 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:29:26.266593  522370 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:29:26.266603  522370 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:29:26.266610  522370 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:29:26.266617  522370 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:29:26.266625  522370 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:29:26.266636  522370 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:29:26.266641  522370 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:29:26.266647  522370 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:29:26.266656  522370 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:29:26.266660  522370 command_runner.go:130] > #
	I1206 10:29:26.266665  522370 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:29:26.266675  522370 command_runner.go:130] > #
	I1206 10:29:26.266682  522370 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:29:26.266689  522370 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:29:26.266694  522370 command_runner.go:130] > #
	I1206 10:29:26.266701  522370 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:29:26.266708  522370 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:29:26.266711  522370 command_runner.go:130] > #
	I1206 10:29:26.266718  522370 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:29:26.266723  522370 command_runner.go:130] > # feature.
	I1206 10:29:26.266726  522370 command_runner.go:130] > #
	I1206 10:29:26.266732  522370 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1206 10:29:26.266739  522370 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:29:26.266747  522370 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:29:26.266754  522370 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:29:26.266763  522370 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:29:26.266768  522370 command_runner.go:130] > #
	I1206 10:29:26.266774  522370 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:29:26.266786  522370 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:29:26.266792  522370 command_runner.go:130] > #
	I1206 10:29:26.266800  522370 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1206 10:29:26.266806  522370 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:29:26.266809  522370 command_runner.go:130] > #
	I1206 10:29:26.266815  522370 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:29:26.266825  522370 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:29:26.266831  522370 command_runner.go:130] > # limitation.
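Tying the notifier description above together: the runtime handler the pod uses must list "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations (the crun entry below allows only "io.containers.trace-syscall", so the notifier is not enabled in this run), and the pod sandbox must carry the annotation itself. A minimal sketch of that opt-in, written as Python data for illustration only:

    # Illustrative only: sandbox annotation requesting the seccomp notifier's
    # "stop" action, per the config comments above. The runtime handler must
    # allow this annotation; in this run neither crun nor runc does.
    pod_annotations = {
        "io.kubernetes.cri-o.seccompNotifierAction": "stop",
    }
    # The pod's restartPolicy must also be "Never", or the kubelet restarts
    # the container as soon as the notifier terminates it.
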
	I1206 10:29:26.266835  522370 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:29:26.266848  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:29:26.266853  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266856  522370 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:29:26.266862  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266868  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266873  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266880  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266884  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266889  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266892  522370 command_runner.go:130] > allowed_annotations = [
	I1206 10:29:26.266897  522370 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:29:26.266900  522370 command_runner.go:130] > ]
	I1206 10:29:26.266904  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266911  522370 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:29:26.266916  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:29:26.266921  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266932  522370 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:29:26.266939  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266943  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266947  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266952  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266961  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266966  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266970  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266981  522370 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:29:26.266987  522370 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:29:26.266995  522370 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:29:26.267006  522370 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 10:29:26.267024  522370 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:29:26.267035  522370 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:29:26.267047  522370 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:29:26.267054  522370 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:29:26.267063  522370 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:29:26.267072  522370 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:29:26.267080  522370 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:29:26.267087  522370 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:29:26.267094  522370 command_runner.go:130] > # Example:
	I1206 10:29:26.267098  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:29:26.267103  522370 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:29:26.267108  522370 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:29:26.267132  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:29:26.267141  522370 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:29:26.267145  522370 command_runner.go:130] > # cpushares = "5"
	I1206 10:29:26.267149  522370 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:29:26.267152  522370 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:29:26.267156  522370 command_runner.go:130] > # cpulimit = "35"
	I1206 10:29:26.267159  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.267165  522370 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:29:26.267172  522370 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:29:26.267181  522370 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:29:26.267188  522370 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:29:26.267199  522370 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:29:26.267205  522370 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
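The workload example above therefore translates into two pod annotations: the activation key (value ignored) and an optional per-container resource override. A sketch of both, using a hypothetical container name and cpushares value:

    # Illustrative only: annotations opting a container named "app" into the
    # "workload-type" workload from the example above; "app" and "512" are
    # made-up values for this sketch.
    annotations = {
        "io.crio/workload": "",                               # activation key; value ignored
        "io.crio.workload-type/app": '{"cpushares": "512"}',  # per-container override
    }
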
	I1206 10:29:26.267210  522370 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:29:26.267224  522370 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:29:26.267229  522370 command_runner.go:130] > # Default value is set to true
	I1206 10:29:26.267234  522370 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:29:26.267244  522370 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:29:26.267251  522370 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:29:26.267255  522370 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:29:26.267260  522370 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:29:26.267265  522370 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:29:26.267277  522370 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:29:26.267283  522370 command_runner.go:130] > # timezone = ""
	I1206 10:29:26.267290  522370 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:29:26.267293  522370 command_runner.go:130] > #
	I1206 10:29:26.267299  522370 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:29:26.267310  522370 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:29:26.267313  522370 command_runner.go:130] > [crio.image]
	I1206 10:29:26.267319  522370 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:29:26.267324  522370 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:29:26.267332  522370 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:29:26.267339  522370 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267343  522370 command_runner.go:130] > # global_auth_file = ""
	I1206 10:29:26.267351  522370 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:29:26.267359  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267364  522370 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.267378  522370 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:29:26.267385  522370 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267396  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267401  522370 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:29:26.267407  522370 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:29:26.267413  522370 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 10:29:26.267421  522370 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 10:29:26.267427  522370 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:29:26.267434  522370 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:29:26.267440  522370 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:29:26.267447  522370 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:29:26.267455  522370 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:29:26.267461  522370 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:29:26.267471  522370 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:29:26.267480  522370 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:29:26.267484  522370 command_runner.go:130] > # pinned_images = [
	I1206 10:29:26.267488  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267494  522370 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:29:26.267502  522370 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:29:26.267509  522370 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:29:26.267517  522370 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:29:26.267525  522370 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:29:26.267530  522370 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:29:26.267538  522370 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:29:26.267548  522370 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:29:26.267556  522370 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:29:26.267566  522370 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1206 10:29:26.267572  522370 command_runner.go:130] > # wide policy will be used as a fallback. Must be an absolute path.
	I1206 10:29:26.267579  522370 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:29:26.267587  522370 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:29:26.267594  522370 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:29:26.267597  522370 command_runner.go:130] > # changing them here.
	I1206 10:29:26.267603  522370 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:29:26.267608  522370 command_runner.go:130] > # insecure_registries = [
	I1206 10:29:26.267613  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267620  522370 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:29:26.267637  522370 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1206 10:29:26.267641  522370 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:29:26.267646  522370 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:29:26.267671  522370 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:29:26.267678  522370 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:29:26.267687  522370 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:29:26.267699  522370 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:29:26.267706  522370 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:29:26.267714  522370 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1206 10:29:26.267723  522370 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:29:26.267732  522370 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:29:26.267739  522370 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:29:26.267746  522370 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:29:26.267753  522370 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:29:26.267758  522370 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:29:26.267766  522370 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:29:26.267775  522370 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1206 10:29:26.267781  522370 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:29:26.267788  522370 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:29:26.267795  522370 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:29:26.267798  522370 command_runner.go:130] > # CNI plugins.
	I1206 10:29:26.267802  522370 command_runner.go:130] > [crio.network]
	I1206 10:29:26.267808  522370 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:29:26.267816  522370 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:29:26.267820  522370 command_runner.go:130] > # cni_default_network = ""
	I1206 10:29:26.267826  522370 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:29:26.267836  522370 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:29:26.267842  522370 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:29:26.267845  522370 command_runner.go:130] > # plugin_dirs = [
	I1206 10:29:26.267853  522370 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:29:26.267856  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267861  522370 command_runner.go:130] > # List of included pod metrics.
	I1206 10:29:26.267867  522370 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:29:26.267870  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267879  522370 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1206 10:29:26.267885  522370 command_runner.go:130] > [crio.metrics]
	I1206 10:29:26.267890  522370 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:29:26.267897  522370 command_runner.go:130] > # enable_metrics = false
	I1206 10:29:26.267902  522370 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:29:26.267906  522370 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:29:26.267912  522370 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:29:26.267919  522370 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:29:26.267925  522370 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:29:26.267938  522370 command_runner.go:130] > # metrics_collectors = [
	I1206 10:29:26.267943  522370 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:29:26.267947  522370 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:29:26.267951  522370 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:29:26.267954  522370 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:29:26.267958  522370 command_runner.go:130] > # 	"operations_total",
	I1206 10:29:26.267962  522370 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:29:26.267966  522370 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:29:26.267970  522370 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:29:26.267977  522370 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:29:26.267981  522370 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:29:26.267986  522370 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:29:26.267990  522370 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:29:26.267993  522370 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:29:26.267997  522370 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:29:26.268003  522370 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:29:26.268007  522370 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:29:26.268011  522370 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:29:26.268014  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268020  522370 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:29:26.268024  522370 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:29:26.268029  522370 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:29:26.268032  522370 command_runner.go:130] > # metrics_port = 9090
	I1206 10:29:26.268037  522370 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:29:26.268041  522370 command_runner.go:130] > # metrics_socket = ""
	I1206 10:29:26.268046  522370 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:29:26.268052  522370 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:29:26.268061  522370 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:29:26.268070  522370 command_runner.go:130] > # certificate on any modification event.
	I1206 10:29:26.268074  522370 command_runner.go:130] > # metrics_cert = ""
	I1206 10:29:26.268079  522370 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:29:26.268086  522370 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:29:26.268090  522370 command_runner.go:130] > # metrics_key = ""
	I1206 10:29:26.268099  522370 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:29:26.268106  522370 command_runner.go:130] > [crio.tracing]
	I1206 10:29:26.268112  522370 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:29:26.268116  522370 command_runner.go:130] > # enable_tracing = false
	I1206 10:29:26.268121  522370 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:29:26.268127  522370 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:29:26.268135  522370 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:29:26.268143  522370 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 10:29:26.268147  522370 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:29:26.268150  522370 command_runner.go:130] > [crio.nri]
	I1206 10:29:26.268155  522370 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:29:26.268158  522370 command_runner.go:130] > # enable_nri = true
	I1206 10:29:26.268162  522370 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:29:26.268166  522370 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:29:26.268170  522370 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:29:26.268174  522370 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:29:26.268181  522370 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:29:26.268187  522370 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:29:26.268195  522370 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:29:26.268252  522370 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:29:26.268260  522370 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:29:26.268265  522370 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:29:26.268270  522370 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:29:26.268274  522370 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:29:26.268287  522370 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:29:26.268294  522370 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:29:26.268307  522370 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:29:26.268312  522370 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:29:26.268322  522370 command_runner.go:130] > # - OCI hook injection
	I1206 10:29:26.268327  522370 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:29:26.268333  522370 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:29:26.268340  522370 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:29:26.268344  522370 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:29:26.268356  522370 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:29:26.268363  522370 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:29:26.268368  522370 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:29:26.268375  522370 command_runner.go:130] > #
	I1206 10:29:26.268380  522370 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:29:26.268384  522370 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:29:26.268397  522370 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:29:26.268403  522370 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:29:26.268408  522370 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:29:26.268416  522370 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:29:26.268421  522370 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:29:26.268425  522370 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:29:26.268431  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268436  522370 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:29:26.268442  522370 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:29:26.268446  522370 command_runner.go:130] > [crio.stats]
	I1206 10:29:26.268454  522370 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:29:26.268465  522370 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:29:26.268469  522370 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:29:26.268475  522370 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:29:26.268484  522370 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:29:26.268489  522370 command_runner.go:130] > # collection_period = 0
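All of the commented keys in the dump above are CRI-O's built-in defaults; only the uncommented ones (signature_policy and the crun/runc runtime tables) are set explicitly in this config. A minimal sketch, assuming Python 3.11+ for the stdlib tomllib module and a readable /etc/crio/crio.conf on the node, of checking which default_runtime is in effect:

    import tomllib

    # tomllib.load requires a binary file handle
    with open("/etc/crio/crio.conf", "rb") as f:
        cfg = tomllib.load(f)

    # Commented-out keys are absent from the parsed TOML, so fall back to the
    # documented default ("crun") when the key is not set explicitly.
    runtime = cfg.get("crio", {}).get("runtime", {}).get("default_runtime", "crun")
    print("default_runtime:", runtime)
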
	I1206 10:29:26.268581  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:26.268595  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:26.268620  522370 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:29:26.268646  522370 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:29:26.268768  522370 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
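The rendered kubeadm config is four YAML documents in a single file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" lines. A minimal stdlib-only sketch, assuming a local copy of the rendered file, that splits it and lists each document's kind:

    from pathlib import Path

    text = Path("kubeadm.yaml").read_text()  # assumed local copy of the config above
    docs = [d for d in text.split("\n---\n") if d.strip()]
    for doc in docs:
        # each kubeadm/kubelet/kube-proxy document carries a top-level "kind:" line
        kind = next(line.split(":", 1)[1].strip()
                    for line in doc.splitlines()
                    if line.startswith("kind:"))
        print(kind)
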
	
	I1206 10:29:26.268849  522370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:29:26.276198  522370 command_runner.go:130] > kubeadm
	I1206 10:29:26.276217  522370 command_runner.go:130] > kubectl
	I1206 10:29:26.276221  522370 command_runner.go:130] > kubelet
	I1206 10:29:26.277128  522370 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:29:26.277245  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:29:26.285085  522370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:29:26.297894  522370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:29:26.310811  522370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:29:26.323875  522370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:29:26.327560  522370 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:29:26.327877  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:26.463333  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:27.181623  522370 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:29:27.181646  522370 certs.go:195] generating shared ca certs ...
	I1206 10:29:27.181662  522370 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.181794  522370 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:29:27.181841  522370 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:29:27.181855  522370 certs.go:257] generating profile certs ...
	I1206 10:29:27.181981  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:29:27.182049  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:29:27.182120  522370 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:29:27.182139  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:29:27.182178  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:29:27.182195  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:29:27.182206  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:29:27.182221  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:29:27.182231  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:29:27.182242  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:29:27.182252  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:29:27.182310  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:29:27.182343  522370 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:29:27.182351  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:29:27.182391  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:29:27.182420  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:29:27.182445  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:29:27.182502  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:27.182537  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.182553  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.182567  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.183155  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:29:27.204776  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:29:27.223807  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:29:27.246828  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:29:27.269763  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:29:27.290536  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:29:27.308147  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:29:27.326269  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:29:27.344314  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:29:27.361949  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:29:27.379296  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:29:27.396825  522370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:29:27.409539  522370 ssh_runner.go:195] Run: openssl version
	I1206 10:29:27.415501  522370 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:29:27.415885  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.423483  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:29:27.431381  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435336  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435420  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435491  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.477997  522370 command_runner.go:130] > 51391683
	I1206 10:29:27.478450  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:29:27.485910  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.493199  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:29:27.500533  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504197  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504254  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504314  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.549795  522370 command_runner.go:130] > 3ec20f2e
	I1206 10:29:27.550294  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:29:27.557856  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.565301  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:29:27.572772  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576768  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576853  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576925  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.618106  522370 command_runner.go:130] > b5213941
	I1206 10:29:27.618536  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
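	The hash-then-symlink sequence above is how OpenSSL trust stores are populated: "openssl x509 -hash -noout" prints the certificate's subject-name hash (51391683, 3ec20f2e, b5213941 in this run), and OpenSSL resolves trusted CAs by looking up /etc/ssl/certs/<hash>.0, so each installed PEM gets a symlink of that name. A minimal Go sketch of the same flow; the helper name installCACert is illustrative, not from minikube:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert mirrors the log's sequence: compute the subject-name hash
	// (equivalent of `openssl x509 -hash -noout -in <pem>`), then symlink the
	// PEM to /etc/ssl/certs/<hash>.0 so OpenSSL can find it.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // like ln -fs: replace any stale link first
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}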
	I1206 10:29:27.626130  522370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629702  522370 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629728  522370 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:29:27.629736  522370 command_runner.go:130] > Device: 259,1	Inode: 3640487     Links: 1
	I1206 10:29:27.629742  522370 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:27.629749  522370 command_runner.go:130] > Access: 2025-12-06 10:25:18.913466133 +0000
	I1206 10:29:27.629754  522370 command_runner.go:130] > Modify: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629758  522370 command_runner.go:130] > Change: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629764  522370 command_runner.go:130] >  Birth: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629823  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:29:27.670498  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.670941  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:29:27.711871  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.712351  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:29:27.753204  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.753665  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:29:27.795554  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.796089  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:29:27.836809  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.837203  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:29:27.878291  522370 command_runner.go:130] > Certificate will not expire
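	The -checkend 86400 probes ask whether each control-plane certificate expires within the next 24 hours (86400 seconds); exit status 0 and "Certificate will not expire" mean it does not. The same check can be done natively with crypto/x509; a sketch assuming the file holds a single PEM certificate:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the native equivalent of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}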
	I1206 10:29:27.878357  522370 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:27.878433  522370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:27.878503  522370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:27.905835  522370 cri.go:89] found id: ""
	I1206 10:29:27.905910  522370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:29:27.912750  522370 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:29:27.912773  522370 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:29:27.912780  522370 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:29:27.913690  522370 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:29:27.913706  522370 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:29:27.913783  522370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:29:27.921335  522370 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:29:27.921755  522370 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.921867  522370 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "functional-123579" cluster setting kubeconfig missing "functional-123579" context setting]
	I1206 10:29:27.922200  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.922608  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.922766  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
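	The kapi.go line above records the rest.Config minikube builds for this profile: the API server at https://192.168.49.2:8441 plus the profile's client certificate, key, and CA file. A sketch of constructing an equivalent client from the repaired kubeconfig with client-go (assuming k8s.io/client-go is available; paths are the ones from the log):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// BuildConfigFromFlags reads host, client cert/key, and CA from the
		// kubeconfig, yielding a rest.Config like the one logged above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-484819/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-123579", metav1.GetOptions{})
		if err != nil {
			// While the apiserver restarts this fails with connection refused,
			// exactly as the round_trippers lines below show.
			fmt.Println("GET node failed:", err)
			return
		}
		fmt.Println("node:", node.Name)
	}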
	I1206 10:29:27.923311  522370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:29:27.923332  522370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:29:27.923338  522370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:29:27.923344  522370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:29:27.923348  522370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:29:27.923710  522370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:29:27.923805  522370 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:29:27.932172  522370 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:29:27.932206  522370 kubeadm.go:602] duration metric: took 18.493373ms to restartPrimaryControlPlane
	I1206 10:29:27.932216  522370 kubeadm.go:403] duration metric: took 53.86688ms to StartCluster
	I1206 10:29:27.932230  522370 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.932300  522370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.932906  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.933111  522370 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:29:27.933400  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:27.933457  522370 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:29:27.933598  522370 addons.go:70] Setting storage-provisioner=true in profile "functional-123579"
	I1206 10:29:27.933615  522370 addons.go:239] Setting addon storage-provisioner=true in "functional-123579"
	I1206 10:29:27.933640  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.933662  522370 addons.go:70] Setting default-storageclass=true in profile "functional-123579"
	I1206 10:29:27.933709  522370 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-123579"
	I1206 10:29:27.934067  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.934105  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.937180  522370 out.go:179] * Verifying Kubernetes components...
	I1206 10:29:27.943300  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:27.955394  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.955630  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.955941  522370 addons.go:239] Setting addon default-storageclass=true in "functional-123579"
	I1206 10:29:27.955970  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.956408  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.980014  522370 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:29:27.983923  522370 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:27.983954  522370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:29:27.984026  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:27.996144  522370 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:27.996165  522370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:29:27.996228  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:28.024613  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.044906  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
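	The sshutil.go lines dial the container's SSH daemon through the Docker-published host port (127.0.0.1:33183, mapped to 22/tcp by the docker container inspect template above) using the profile's id_rsa. A sketch of that connection with golang.org/x/crypto/ssh (an assumed dependency; host-key checking is skipped only because this mirrors a throwaway test rig, not production practice):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
		}
		// Dial the Docker-published port that maps to the container's sshd.
		client, err := ssh.Dial("tcp", "127.0.0.1:33183", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}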
	I1206 10:29:28.158003  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:28.171055  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:28.191069  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:28.930363  522370 node_ready.go:35] waiting up to 6m0s for node "functional-123579" to be "Ready" ...
	I1206 10:29:28.930490  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930625  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930666  522370 retry.go:31] will retry after 220.153302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930749  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930787  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930813  522370 retry.go:31] will retry after 205.296978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:28.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:28.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.136761  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.151269  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.213820  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.217541  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.217581  522370 retry.go:31] will retry after 414.855546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235243  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.235363  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235412  522370 retry.go:31] will retry after 542.074768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
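	Each failed apply is rescheduled by retry.go with a growing, jittered delay (220ms, 205ms, 414ms, 542ms, ... in this run) until the apiserver starts answering on port 8441. A generic sketch of that retry-with-backoff pattern; this is an illustration, not minikube's actual retry implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryCommand reruns a command with exponential backoff plus jitter,
	// like the increasing "will retry after ..." delays in the log.
	func retryCommand(name string, args []string, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			delay := base<<i + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		_ = retryCommand("kubectl",
			[]string{"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"},
			5, 200*time.Millisecond)
	}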
	I1206 10:29:29.431607  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.431755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.432098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.633557  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.704871  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.715208  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.715276  522370 retry.go:31] will retry after 512.072151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.778572  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.842567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.842631  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.842656  522370 retry.go:31] will retry after 453.896864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.930817  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.930917  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.931386  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.227644  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:30.292361  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.292404  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.292441  522370 retry.go:31] will retry after 965.22043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.297573  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:30.354035  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.357760  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.357796  522370 retry.go:31] will retry after 830.21573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.430970  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.431039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.431358  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:30.931272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
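	node_ready.go polls the node's Ready condition every 500ms for up to 6 minutes, and connection-refused errors like the one above are retried rather than treated as fatal while the apiserver comes back up. A sketch of the same wait loop using client-go's wait helpers (assuming k8s.io/apimachinery's wait package; the clientset is built as in the earlier sketch):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the node reports Ready=True, swallowing
	// transient errors (connection refused) so the loop keeps going.
	func waitNodeReady(cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // apiserver still restarting; keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-484819/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "functional-123579"); err != nil {
			fmt.Println("node never became Ready:", err)
			return
		}
		fmt.Println("node is Ready")
	}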
	I1206 10:29:31.188810  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:31.258540  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:31.280251  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.280382  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.280411  522370 retry.go:31] will retry after 670.25639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331402  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.331517  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331545  522370 retry.go:31] will retry after 1.065706699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.430665  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.430772  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.431166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.930712  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.930893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.931401  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.951563  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:32.028942  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.028998  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.029018  522370 retry.go:31] will retry after 2.122665166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.397466  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:32.431043  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.431193  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.431584  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:32.458856  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.458892  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.458911  522370 retry.go:31] will retry after 1.728877951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.931628  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.931705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.932104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:32.932161  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:33.430893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.431324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:33.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.931279  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.152755  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:34.188350  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:34.249027  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.249069  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.249090  522370 retry.go:31] will retry after 3.684646027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294198  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.294244  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294296  522370 retry.go:31] will retry after 1.427612825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.431504  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.930753  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:35.430737  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:35.431258  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:35.722778  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:35.786215  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:35.786258  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.786277  522370 retry.go:31] will retry after 5.772571648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.931559  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.931640  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.431654  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.431914  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.930676  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.931086  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.430781  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.931882  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:37.931937  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:37.934240  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:38.012005  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:38.012049  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.012071  522370 retry.go:31] will retry after 2.264254307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 repeated every ~500ms from 10:29:38.430 to 10:29:39.931 (4 attempts, identical Accept/User-Agent headers); every response was empty (connection refused) ...]
	I1206 10:29:40.276629  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:40.338233  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:40.338274  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.338294  522370 retry.go:31] will retry after 6.465617702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
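The "ssh_runner.go:195] Run: ..." lines show how minikube executes these applies: each attempt is a fresh command run inside the guest over SSH, with KUBECONFIG pointing at the in-guest config and the versioned kubectl under /var/lib/minikube/binaries. A hedged sketch of running such a command over SSH with golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders, not values taken from this run.

    // SSH command-runner sketch (assumptions: address, user, key path).
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/docker/.ssh/id_rsa") // placeholder key path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // placeholder user
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never do this in production
        }
        client, err := ssh.Dial("tcp", "192.168.49.2:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml")
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("remote command failed:", err) // exit status 1 while the apiserver is down
        }
    }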
	I1206 10:29:40.431489  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.431563  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.431893  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:40.431948  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:40.931681  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.931758  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.932017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.559542  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:41.618815  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:41.618852  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.618871  522370 retry.go:31] will retry after 5.212992024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
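Note why validation is the step that fails: kubectl's client-side validation first downloads the server's OpenAPI schema (/openapi/v2), so with the apiserver down the command dies on that fetch before any YAML is checked. Passing --validate=false would skip the download, but the subsequent apply would still be refused by the dead endpoint. A minimal probe of the same endpoint is sketched below; TLS verification is skipped purely to allow an anonymous probe, whereas kubectl actually authenticates with client certificates.

    // OpenAPI-endpoint probe sketch (assumption: anonymous probe, TLS verify skipped).
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{
            Timeout:   32 * time.Second, // same timeout kubectl reports in the error above
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get("https://localhost:8441/openapi/v2?timeout=32s")
        if err != nil {
            // While the apiserver is down: "dial tcp [::1]:8441: connect: connection refused".
            fmt.Println("schema download failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
    }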
	[... same GET poll repeated every ~500ms from 10:29:41.931 to 10:29:46.431; node_ready.go:55 logged "will retry" connection-refused warnings at 10:29:42.931 and 10:29:45.431 ...]
	I1206 10:29:46.804868  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:46.832399  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:46.865940  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.865975  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.865994  522370 retry.go:31] will retry after 4.982943882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.906612  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906632  522370 retry.go:31] will retry after 5.755281988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET poll repeated every ~500ms from 10:29:46.930 to 10:29:51.431; connection-refused warnings at 10:29:47.931 and 10:29:50.431 ...]
	I1206 10:29:51.849751  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:51.909824  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:51.909861  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.909882  522370 retry.go:31] will retry after 17.161477779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.930951  522370 type.go:168] "Request Body" body=""
	I1206 10:29:51.931035  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:51.931342  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:52.431051  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.431146  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.431458  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:52.431512  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:52.663117  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:52.730608  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:52.730656  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.730678  522370 retry.go:31] will retry after 12.860735555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET poll repeated every ~500ms from 10:29:52.931 to 10:30:05.431; connection-refused warnings at 10:29:54.931, 10:29:56.931, 10:29:58.931, 10:30:00.932, and 10:30:03.431 ...]
	I1206 10:30:05.591568  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:05.650107  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:05.653722  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.653756  522370 retry.go:31] will retry after 16.31009922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET poll repeated every ~500ms from 10:30:05.931 to 10:30:08.931; connection-refused warnings at 10:30:05.931 and 10:30:08.431 ...]
	I1206 10:30:09.072554  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:09.131495  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:09.131531  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.131550  522370 retry.go:31] will retry after 16.873374267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET poll repeated every ~500ms from 10:30:09.430 to 10:30:21.931; connection-refused warnings at 10:30:10.931, 10:30:12.931, 10:30:15.431, 10:30:17.931, and 10:30:20.431 ...]
	I1206 10:30:21.964425  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:22.031284  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:22.031334  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:22.031356  522370 retry.go:31] will retry after 35.791693435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
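
	The apply fails because kubectl cannot download the OpenAPI schema from the (down) apiserver to validate the manifest; kubectl's own error suggests --validate=false, but minikube instead schedules a retry via retry.go. A minimal Go sketch of that retry-with-backoff pattern, under the assumption of a jittered exponential backoff (the helper below is illustrative, not minikube's actual code):

	package main

	import (
		"errors"
		"log"
		"math/rand"
		"time"
	)

	// retryApply retries fn with jittered exponential backoff, echoing the
	// "will retry after ..." lines above. Illustrative sketch only.
	func retryApply(fn func() error, maxAttempts int) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(1<<attempt) * time.Second
			delay += time.Duration(rand.Int63n(int64(delay))) // jitter, hence odd delays like 35.79s
			log.Printf("will retry after %v: %v", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retryApply(func() error {
			// Stand-in for "kubectl apply --force -f storageclass.yaml" against a down apiserver.
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}, 3)
		log.Printf("final: %v", err)
	}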
	[GET polls of /api/v1/nodes/functional-123579 continued every ~500ms from 10:30:22.430 through 10:30:25.931, all refused; node_ready.go:55 "will retry" warnings at 10:30:22.931 and 10:30:25.431]
	I1206 10:30:26.005763  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:26.074782  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:26.074834  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:26.074855  522370 retry.go:31] will retry after 34.92165894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
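
	The GET polls filling this log come from minikube's node-readiness wait loop: every ~500ms it fetches the node object and checks its Ready condition, warning periodically on failure. A minimal client-go sketch of the same check (kubeconfig path and cadence are assumptions; this is not the actual node_ready.go code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-123579", metav1.GetOptions{})
			if err != nil {
				// Mirrors the node_ready.go:55 warnings above while the apiserver is down.
				fmt.Println("error getting node (will retry):", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the ~500ms poll cadence visible in the timestamps
		}
	}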
	[GET polls of /api/v1/nodes/functional-123579 continued every ~500ms from 10:30:26.431 through 10:30:57.431, all refused; node_ready.go:55 repeated its "will retry" warning roughly every two seconds throughout]
	I1206 10:30:57.823985  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:57.887311  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891368  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891481  522370 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
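
	Every failure in this run reduces to the same root cause: nothing is listening on port 8441, so both the node polls (192.168.49.2:8441) and kubectl's OpenAPI fetch (localhost:8441) get "connection refused" until kube-apiserver comes back. A quick Go probe that reproduces the diagnosis from the host (addresses taken from the errors above; sketch only):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Println(addr, "=>", err) // "connect: connection refused" while the apiserver is down
				continue
			}
			conn.Close()
			fmt.Println(addr, "=> reachable")
		}
	}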
	[GET polls of /api/v1/nodes/functional-123579 continued every ~500ms from 10:30:57.930 through 10:31:00.931, all refused; node_ready.go:55 "will retry" warning at 10:30:59.431]
	I1206 10:31:00.997513  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:01.064863  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068488  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068586  522370 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:31:01.073496  522370 out.go:179] * Enabled addons: 
	I1206 10:31:01.076263  522370 addons.go:530] duration metric: took 1m33.142805076s for enable addons: enabled=[]
	[GET polls of /api/v1/nodes/functional-123579 continued every ~500ms from 10:31:01.430 through 10:31:12.931, all refused; node_ready.go:55 "will retry" warnings at 10:31:01.431, 10:31:03.431, 10:31:05.931, 10:31:07.931, 10:31:10.431, and 10:31:12.431]
	I1206 10:31:13.431602  522370 type.go:168] "Request Body" body=""
	I1206 10:31:13.431680  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.432016  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.930772  522370 type.go:168] "Request Body" body=""
	I1206 10:31:13.930841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.931103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:31:14.430893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.431217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.930770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:14.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.931219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:14.931279  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.430782  522370 type.go:168] "Request Body" body=""
	I1206 10:31:15.430850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.431157  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:15.930757  522370 type.go:168] "Request Body" body=""
	I1206 10:31:15.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.931193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:31:16.430830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:31:16.930808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.931093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.430742  522370 type.go:168] "Request Body" body=""
	I1206 10:31:17.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.431217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.431272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.931342  522370 type.go:168] "Request Body" body=""
	I1206 10:31:17.931425  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.931778  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.431537  522370 type.go:168] "Request Body" body=""
	I1206 10:31:18.431605  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.431868  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.930645  522370 type.go:168] "Request Body" body=""
	I1206 10:31:18.930720  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.931093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:31:19.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.431254  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.431307  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:19.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:31:19.930781  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.430790  522370 type.go:168] "Request Body" body=""
	I1206 10:31:20.430893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.931043  522370 type.go:168] "Request Body" body=""
	I1206 10:31:20.931148  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.931503  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.431268  522370 type.go:168] "Request Body" body=""
	I1206 10:31:21.431356  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.431682  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:21.431723  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:21.931490  522370 type.go:168] "Request Body" body=""
	I1206 10:31:21.931570  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.931895  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.431704  522370 type.go:168] "Request Body" body=""
	I1206 10:31:22.431783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.432137  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.930934  522370 type.go:168] "Request Body" body=""
	I1206 10:31:22.931013  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.931330  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:31:23.430800  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.431163  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.930907  522370 type.go:168] "Request Body" body=""
	I1206 10:31:23.931011  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.931347  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:23.931408  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:31:24.430793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.431100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.930781  522370 type.go:168] "Request Body" body=""
	I1206 10:31:24.930881  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.931205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:31:25.430793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:31:25.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.931098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:31:26.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.431285  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.930800  522370 type.go:168] "Request Body" body=""
	I1206 10:31:26.930898  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.931198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.431688  522370 type.go:168] "Request Body" body=""
	I1206 10:31:27.431783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.432074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.931195  522370 type.go:168] "Request Body" body=""
	I1206 10:31:27.931291  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.931692  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.431526  522370 type.go:168] "Request Body" body=""
	I1206 10:31:28.431657  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.432017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:28.432087  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:28.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:31:28.930798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.430715  522370 type.go:168] "Request Body" body=""
	I1206 10:31:29.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.930720  522370 type.go:168] "Request Body" body=""
	I1206 10:31:29.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.430735  522370 type.go:168] "Request Body" body=""
	I1206 10:31:30.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.431203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.930763  522370 type.go:168] "Request Body" body=""
	I1206 10:31:30.930838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.931220  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:30.931276  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:31.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:31:31.430999  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.431356  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.931034  522370 type.go:168] "Request Body" body=""
	I1206 10:31:31.931102  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.931394  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.430894  522370 type.go:168] "Request Body" body=""
	I1206 10:31:32.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.431350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.931206  522370 type.go:168] "Request Body" body=""
	I1206 10:31:32.931296  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.931626  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:32.931683  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:33.431202  522370 type.go:168] "Request Body" body=""
	I1206 10:31:33.431271  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.431607  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.931401  522370 type.go:168] "Request Body" body=""
	I1206 10:31:33.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.931817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.431625  522370 type.go:168] "Request Body" body=""
	I1206 10:31:34.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.432035  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.931669  522370 type.go:168] "Request Body" body=""
	I1206 10:31:34.931742  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.932009  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:34.932053  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
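
[Editor's note] Each repeated cycle above is client-side request logging from client-go's round_trippers.go: a "Request" line with verb, URL, and headers, then a "Response" line with status and elapsed milliseconds (empty status here because the dial never completes). The same effect can be approximated with a custom http.RoundTripper; the sketch and its output format below are our own illustration, not client-go's code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// logRT prints one "Request"/"Response" pair per round trip, loosely
// imitating the round_trippers.go lines in the log above.
type logRT struct{ next http.RoundTripper }

func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Mirrors the empty status="" seen when the dial is refused.
		fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: logRT{next: http.DefaultTransport}}
	resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-123579")
	if err == nil {
		resp.Body.Close()
	}
}
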
	I1206 10:31:35.430771  522370 type.go:168] "Request Body" body=""
	I1206 10:31:35.430852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.431237  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.930935  522370 type.go:168] "Request Body" body=""
	I1206 10:31:35.931012  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.931347  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:31:36.430797  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.431104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.930741  522370 type.go:168] "Request Body" body=""
	I1206 10:31:36.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.931208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:31:37.430790  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.431167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:37.431222  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:37.931252  522370 type.go:168] "Request Body" body=""
	I1206 10:31:37.931330  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.931655  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.431472  522370 type.go:168] "Request Body" body=""
	I1206 10:31:38.431546  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.431863  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.930659  522370 type.go:168] "Request Body" body=""
	I1206 10:31:38.930734  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.931062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.430764  522370 type.go:168] "Request Body" body=""
	I1206 10:31:39.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.431171  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.930872  522370 type.go:168] "Request Body" body=""
	I1206 10:31:39.931015  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.931393  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:39.931453  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.431186  522370 type.go:168] "Request Body" body=""
	I1206 10:31:40.431263  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.431606  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.931379  522370 type.go:168] "Request Body" body=""
	I1206 10:31:40.931446  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.931701  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.431485  522370 type.go:168] "Request Body" body=""
	I1206 10:31:41.431564  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.431887  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.930643  522370 type.go:168] "Request Body" body=""
	I1206 10:31:41.930718  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.931057  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:31:42.430823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.431171  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.431219  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:42.931185  522370 type.go:168] "Request Body" body=""
	I1206 10:31:42.931265  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.931600  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.431298  522370 type.go:168] "Request Body" body=""
	I1206 10:31:43.431370  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.431690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:31:43.931550  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.931859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.431577  522370 type.go:168] "Request Body" body=""
	I1206 10:31:44.431700  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.432084  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:44.432138  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:44.930770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:44.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.931206  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:31:45.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.431161  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.930853  522370 type.go:168] "Request Body" body=""
	I1206 10:31:45.930932  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.931318  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:31:46.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.431204  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.930747  522370 type.go:168] "Request Body" body=""
	I1206 10:31:46.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.931099  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:46.931170  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:47.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:47.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.931329  522370 type.go:168] "Request Body" body=""
	I1206 10:31:47.931412  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.931751  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.431557  522370 type.go:168] "Request Body" body=""
	I1206 10:31:48.431630  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.431921  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.930683  522370 type.go:168] "Request Body" body=""
	I1206 10:31:48.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.931083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:31:49.430898  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.431254  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:49.431313  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:49.930720  522370 type.go:168] "Request Body" body=""
	I1206 10:31:49.930793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.931110  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:50.430874  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.431234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.931041  522370 type.go:168] "Request Body" body=""
	I1206 10:31:50.931153  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.931493  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.431234  522370 type.go:168] "Request Body" body=""
	I1206 10:31:51.431312  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.431631  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:51.431691  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:51.931489  522370 type.go:168] "Request Body" body=""
	I1206 10:31:51.931580  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.931981  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.430704  522370 type.go:168] "Request Body" body=""
	I1206 10:31:52.430806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.431144  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.930913  522370 type.go:168] "Request Body" body=""
	I1206 10:31:52.930987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.931309  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:31:53.430813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.431186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.930898  522370 type.go:168] "Request Body" body=""
	I1206 10:31:53.930988  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.931350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:53.931408  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.431065  522370 type.go:168] "Request Body" body=""
	I1206 10:31:54.431152  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.431403  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:54.931103  522370 type.go:168] "Request Body" body=""
	I1206 10:31:54.931201  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.931542  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.431350  522370 type.go:168] "Request Body" body=""
	I1206 10:31:55.431428  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.431748  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.931464  522370 type.go:168] "Request Body" body=""
	I1206 10:31:55.931536  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:55.931832  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:56.431629  522370 type.go:168] "Request Body" body=""
	I1206 10:31:56.431704  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:56.930782  522370 type.go:168] "Request Body" body=""
	I1206 10:31:56.930863  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.931219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.430905  522370 type.go:168] "Request Body" body=""
	I1206 10:31:57.430978  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.931572  522370 type.go:168] "Request Body" body=""
	I1206 10:31:57.931656  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.931998  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:57.932052  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:58.430762  522370 type.go:168] "Request Body" body=""
	I1206 10:31:58.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.431216  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:31:58.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.931055  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.430703  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.430788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.930832  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.931193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.431018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.431383  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:00.431435  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:00.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.930823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.931167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.430987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.430793  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.430870  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.431209  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.931198  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.931274  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:02.931666  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll repeats unchanged: GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 every ~500ms from 10:32:03 through 10:33:03, each request/response logged exactly as above with status="" milliseconds=0, and node_ready.go:55 emitting the same warning roughly every 2 seconds — error getting node "functional-123579" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	W1206 10:33:03.431470  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:03.930755  522370 type.go:168] "Request Body" body=""
	I1206 10:33:03.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.931200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:33:04.430822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.930891  522370 type.go:168] "Request Body" body=""
	I1206 10:33:04.930967  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.931354  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:05.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:33:05.430860  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.431250  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:05.930755  522370 type.go:168] "Request Body" body=""
	I1206 10:33:05.930835  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.931189  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:05.931249  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:06.430896  522370 type.go:168] "Request Body" body=""
	I1206 10:33:06.430973  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.431278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.930733  522370 type.go:168] "Request Body" body=""
	I1206 10:33:06.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.931165  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:33:07.430825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.931223  522370 type.go:168] "Request Body" body=""
	I1206 10:33:07.931292  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.931564  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:07.931604  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:08.431432  522370 type.go:168] "Request Body" body=""
	I1206 10:33:08.431521  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.431859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:33:08.931724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.932093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:33:09.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:33:09.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.430945  522370 type.go:168] "Request Body" body=""
	I1206 10:33:10.431022  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.431384  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:10.431441  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:10.931100  522370 type.go:168] "Request Body" body=""
	I1206 10:33:10.931186  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.931443  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:33:11.430818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.431167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.930886  522370 type.go:168] "Request Body" body=""
	I1206 10:33:11.930967  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.931341  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.431022  522370 type.go:168] "Request Body" body=""
	I1206 10:33:12.431093  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.431430  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:12.431487  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:12.931422  522370 type.go:168] "Request Body" body=""
	I1206 10:33:12.931498  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.931813  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.431634  522370 type.go:168] "Request Body" body=""
	I1206 10:33:13.431707  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.432041  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.930727  522370 type.go:168] "Request Body" body=""
	I1206 10:33:13.930806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.430764  522370 type.go:168] "Request Body" body=""
	I1206 10:33:14.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.431197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.930912  522370 type.go:168] "Request Body" body=""
	I1206 10:33:14.930993  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.931381  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:14.931437  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:15.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:33:15.430795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.431103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:15.930758  522370 type.go:168] "Request Body" body=""
	I1206 10:33:15.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.430880  522370 type.go:168] "Request Body" body=""
	I1206 10:33:16.430966  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.431327  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.930724  522370 type.go:168] "Request Body" body=""
	I1206 10:33:16.930789  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.931103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:33:17.430996  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.431378  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:17.431433  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:17.931311  522370 type.go:168] "Request Body" body=""
	I1206 10:33:17.931390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.931703  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.431499  522370 type.go:168] "Request Body" body=""
	I1206 10:33:18.431573  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.431859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.931659  522370 type.go:168] "Request Body" body=""
	I1206 10:33:18.931728  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.932101  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.430669  522370 type.go:168] "Request Body" body=""
	I1206 10:33:19.430749  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.431091  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.930819  522370 type.go:168] "Request Body" body=""
	I1206 10:33:19.930896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:19.931264  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:20.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:33:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.431145  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:20.930747  522370 type.go:168] "Request Body" body=""
	I1206 10:33:20.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.931225  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.430895  522370 type.go:168] "Request Body" body=""
	I1206 10:33:21.430968  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:33:21.930814  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.931153  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:22.430742  522370 type.go:168] "Request Body" body=""
	I1206 10:33:22.430815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.431176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:22.431236  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:22.930959  522370 type.go:168] "Request Body" body=""
	I1206 10:33:22.931032  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.430982  522370 type.go:168] "Request Body" body=""
	I1206 10:33:23.431057  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.431412  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.931141  522370 type.go:168] "Request Body" body=""
	I1206 10:33:23.931222  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.931520  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.431230  522370 type.go:168] "Request Body" body=""
	I1206 10:33:24.431303  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.431559  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:24.431598  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:24.931419  522370 type.go:168] "Request Body" body=""
	I1206 10:33:24.931497  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.931798  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.431590  522370 type.go:168] "Request Body" body=""
	I1206 10:33:25.431664  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.432003  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.930717  522370 type.go:168] "Request Body" body=""
	I1206 10:33:25.930787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.931105  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:33:26.430803  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.930777  522370 type.go:168] "Request Body" body=""
	I1206 10:33:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.931184  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:26.931237  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:27.430726  522370 type.go:168] "Request Body" body=""
	I1206 10:33:27.430818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.431145  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:27.931181  522370 type.go:168] "Request Body" body=""
	I1206 10:33:27.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.931566  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.431438  522370 type.go:168] "Request Body" body=""
	I1206 10:33:28.431510  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.431869  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.931537  522370 type.go:168] "Request Body" body=""
	I1206 10:33:28.931618  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.931903  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:28.931960  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:29.430674  522370 type.go:168] "Request Body" body=""
	I1206 10:33:29.430755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.431137  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:29.930914  522370 type.go:168] "Request Body" body=""
	I1206 10:33:29.930990  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.931351  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.431026  522370 type.go:168] "Request Body" body=""
	I1206 10:33:30.431102  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.431376  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.930781  522370 type.go:168] "Request Body" body=""
	I1206 10:33:30.930873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.931192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:31.430878  522370 type.go:168] "Request Body" body=""
	I1206 10:33:31.430956  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.431307  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:31.431363  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:31.930818  522370 type.go:168] "Request Body" body=""
	I1206 10:33:31.930894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:33:32.430850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.431192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.931213  522370 type.go:168] "Request Body" body=""
	I1206 10:33:32.931287  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.931623  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:33.431374  522370 type.go:168] "Request Body" body=""
	I1206 10:33:33.431441  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.431690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:33.431729  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:33.931533  522370 type.go:168] "Request Body" body=""
	I1206 10:33:33.931612  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.931952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.430686  522370 type.go:168] "Request Body" body=""
	I1206 10:33:34.430769  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.431100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.930721  522370 type.go:168] "Request Body" body=""
	I1206 10:33:34.930796  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:35.430773  522370 type.go:168] "Request Body" body=""
	I1206 10:33:35.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.431209  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:35.930752  522370 type.go:168] "Request Body" body=""
	I1206 10:33:35.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.931211  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:35.931270  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:36.430716  522370 type.go:168] "Request Body" body=""
	I1206 10:33:36.430789  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.431117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:36.930838  522370 type.go:168] "Request Body" body=""
	I1206 10:33:36.930915  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:37.430762  522370 type.go:168] "Request Body" body=""
	I1206 10:33:37.430839  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:37.931227  522370 type.go:168] "Request Body" body=""
	I1206 10:33:37.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.931579  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:37.931629  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:38.431327  522370 type.go:168] "Request Body" body=""
	I1206 10:33:38.431398  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.431755  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:38.931430  522370 type.go:168] "Request Body" body=""
	I1206 10:33:38.931512  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.931837  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.431622  522370 type.go:168] "Request Body" body=""
	I1206 10:33:39.431687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.431948  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.930714  522370 type.go:168] "Request Body" body=""
	I1206 10:33:39.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:40.430846  522370 type.go:168] "Request Body" body=""
	I1206 10:33:40.430923  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.431265  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:40.431320  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:40.930719  522370 type.go:168] "Request Body" body=""
	I1206 10:33:40.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.931103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.430879  522370 type.go:168] "Request Body" body=""
	I1206 10:33:41.430958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.431368  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.931083  522370 type.go:168] "Request Body" body=""
	I1206 10:33:41.931178  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.931515  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:42.431226  522370 type.go:168] "Request Body" body=""
	I1206 10:33:42.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.431581  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:42.431622  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:42.931519  522370 type.go:168] "Request Body" body=""
	I1206 10:33:42.931593  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.931924  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.431683  522370 type.go:168] "Request Body" body=""
	I1206 10:33:43.431760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.432078  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.930714  522370 type.go:168] "Request Body" body=""
	I1206 10:33:43.930784  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.931091  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:44.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:33:44.430805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:44.930741  522370 type.go:168] "Request Body" body=""
	I1206 10:33:44.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.931173  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:44.931227  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:45.430724  522370 type.go:168] "Request Body" body=""
	I1206 10:33:45.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.431154  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:45.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:33:45.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:46.430876  522370 type.go:168] "Request Body" body=""
	I1206 10:33:46.430959  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.431313  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:46.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:33:46.931061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.931413  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:46.931474  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:47.430745  522370 type.go:168] "Request Body" body=""
	I1206 10:33:47.430826  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.431205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:47.931381  522370 type.go:168] "Request Body" body=""
	I1206 10:33:47.931468  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.931814  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.431456  522370 type.go:168] "Request Body" body=""
	I1206 10:33:48.431530  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.431817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.931583  522370 type.go:168] "Request Body" body=""
	I1206 10:33:48.931659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.932002  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:48.932055  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:49.431685  522370 type.go:168] "Request Body" body=""
	I1206 10:33:49.431764  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.432103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:49.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:33:49.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.931113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:33:50.430816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.431162  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:33:50.930827  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.931179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:51.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:33:51.430805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.431143  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:51.431197  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:51.930877  522370 type.go:168] "Request Body" body=""
	I1206 10:33:51.930958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.931307  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.431057  522370 type.go:168] "Request Body" body=""
	I1206 10:33:52.431157  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.431503  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.931288  522370 type.go:168] "Request Body" body=""
	I1206 10:33:52.931355  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:53.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:33:53.431421  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.431742  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:53.431799  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:53.931572  522370 type.go:168] "Request Body" body=""
	I1206 10:33:53.931647  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.931997  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-123579 poll above repeats unchanged every ~500ms from 10:33:54 through 10:34:55, each attempt returning no response (status="" headers="" milliseconds=0), with the same "connection refused" will-retry warning logged after roughly every fifth attempt ...]
	I1206 10:34:55.930722  522370 type.go:168] "Request Body" body=""
	I1206 10:34:55.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.931160  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:55.931217  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:56.430917  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.430995  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.431296  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:34:56.931062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.931423  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.431029  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.431303  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.931550  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.931631  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.932029  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:58.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.431155  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.930843  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.930914  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.430950  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.431266  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.930906  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.430976  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.431061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.431541  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:00.431605  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:00.931369  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.431561  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.930651  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.930724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.930990  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.430729  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.430828  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.931011  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.931089  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.931442  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:02.931498  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:03.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.430888  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.431297  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.930812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:05.431281  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:05.930825  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.930901  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.931256  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.431148  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.931217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.430991  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.431345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:07.431402  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:07.931619  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.931687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.931937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.430638  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.430708  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.431028  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.431338  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:09.931252  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.430779  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.430901  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.430718  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.431211  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.931636  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.431462  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.431538  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.431885  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.931641  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.431257  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:14.930975  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.931053  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.931466  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.431217  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.431580  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.931377  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.931454  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.931796  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.431484  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.431559  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.431888  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:16.431945  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:16.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.931713  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.931977  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.931466  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.931549  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.931886  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.431642  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.431964  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:18.432006  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:18.930687  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.930760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.931117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.430852  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.430938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.431325  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.930751  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.930852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.931255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:20.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:21.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.930732  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.931186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.430734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.430810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.931191  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.931524  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:22.931567  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:23.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.431424  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.431074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.430898  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.431343  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:25.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:25.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.931103  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.931404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.931215  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.430765  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.931322  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.931759  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:27.931820  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:28.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.431179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.930795  522370 node_ready.go:38] duration metric: took 6m0.000265171s for node "functional-123579" to be "Ready" ...
	I1206 10:35:28.934235  522370 out.go:203] 
	W1206 10:35:28.937230  522370 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:35:28.937255  522370 out.go:285] * 
	W1206 10:35:28.939411  522370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:35:28.942269  522370 out.go:203] 
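
The loop condensed above is minikube's node-readiness wait: it re-GETs the node object twice a second and gives up once the 6m0s budget quoted in the GUEST_START error is spent. Below is a minimal, self-contained Go sketch of that pattern. It is an illustrative model, not minikube's actual node_ready.go; the URL and timeout are simply copied from the log, and a real client would use the cluster CA and credentials from the kubeconfig rather than a plain http.Get (which fails TLS verification here and so exercises the retry path).

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls url every 500ms until it answers 200 OK or the
// 6-minute deadline expires, mirroring the retry loop in the log.
func waitNodeReady(ctx context.Context, url string) error {
	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// this is the "WaitNodeCondition: context deadline exceeded" path
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // "connection refused" lands here and we retry
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // caller would now inspect the node's Ready condition
			}
		}
	}
}

func main() {
	fmt.Println(waitNodeReady(context.Background(),
		"https://192.168.49.2:8441/api/v1/nodes/functional-123579"))
}
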
	
	
	==> CRI-O <==
	Dec 06 10:35:37 functional-123579 crio[5369]: time="2025-12-06T10:35:37.825946902Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=26927795-6759-44f8-add4-c26f908ce36b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901587472Z" level=info msg="Checking image status: minikube-local-cache-test:functional-123579" id=88bd426f-8c5f-4fbf-aba5-2f5838db28ed name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901763378Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901802261Z" level=info msg="Image minikube-local-cache-test:functional-123579 not found" id=88bd426f-8c5f-4fbf-aba5-2f5838db28ed name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901873653Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-123579 found" id=88bd426f-8c5f-4fbf-aba5-2f5838db28ed name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.925177932Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-123579" id=106d2dcb-4ec2-48ec-9614-c919c8de59ae name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.925312296Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-123579 not found" id=106d2dcb-4ec2-48ec-9614-c919c8de59ae name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.925358605Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-123579 found" id=106d2dcb-4ec2-48ec-9614-c919c8de59ae name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.949512637Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-123579" id=bedae279-2bdf-4e1d-bd1a-fd270469b349 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.949644146Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-123579 not found" id=bedae279-2bdf-4e1d-bd1a-fd270469b349 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.949683243Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-123579 found" id=bedae279-2bdf-4e1d-bd1a-fd270469b349 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:39 functional-123579 crio[5369]: time="2025-12-06T10:35:39.957711018Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=841d613d-f718-4e71-9062-0e61fb23cf91 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.314941799Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a2b6cfb4-2779-4f28-9ac1-b03566db9430 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.315155956Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a2b6cfb4-2779-4f28-9ac1-b03566db9430 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.315207007Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a2b6cfb4-2779-4f28-9ac1-b03566db9430 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.937195525Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a10dd09a-06ad-4ccd-95d6-4421c496e934 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.937380563Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a10dd09a-06ad-4ccd-95d6-4421c496e934 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.937444512Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a10dd09a-06ad-4ccd-95d6-4421c496e934 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.969660306Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=72a41cda-abbf-41f3-942e-80fc94d1c824 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.96981396Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=72a41cda-abbf-41f3-942e-80fc94d1c824 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.969854682Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=72a41cda-abbf-41f3-942e-80fc94d1c824 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.994752542Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e8c13efc-96c7-4c82-882d-ac67484cb38d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.994886389Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e8c13efc-96c7-4c82-882d-ac67484cb38d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.994922737Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=e8c13efc-96c7-4c82-882d-ac67484cb38d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:41 functional-123579 crio[5369]: time="2025-12-06T10:35:41.547211049Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=88a049bd-ddb4-4825-8e05-272f51d7ed42 name=/runtime.v1.ImageService/ImageStatus
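
Worth noting from the CRI-O lines above: for the unqualified reference minikube-local-cache-test:functional-123579, CRI-O consults its unqualified-search registries (per /etc/containers/registries.conf.d/crio.conf) and probes one fully qualified candidate per registry before declaring the image absent. A rough Go sketch of that expansion follows; the registry list here is an assumption read off the log, not parsed from the actual config file.

package main

import "fmt"

// candidates expands an unqualified image reference against each
// configured search registry, in order, matching the three
// "Checking image status" probes in the log above.
func candidates(image string, searchRegistries []string) []string {
	out := []string{image} // the bare name is checked first
	for _, reg := range searchRegistries {
		out = append(out, reg+"/library/"+image)
	}
	return out
}

func main() {
	for _, c := range candidates("minikube-local-cache-test:functional-123579",
		[]string{"docker.io", "localhost"}) {
		fmt.Println(c)
	}
}
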
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:35:43.217208    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:43.217856    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:43.219629    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:43.220205    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:43.221748    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
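
The five memcache.go errors above are kubectl's API discovery failing, and "connection refused" (rather than a timeout) means nothing is listening on port 8441 at all. A raw TCP dial confirms that independently of kubectl; a minimal Go sketch, with the host and port copied from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // "connection refused" lands here
		return
	}
	conn.Close()
	fmt.Println("port 8441 is accepting connections")
}
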
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:35:43 up  3:18,  0 user,  load average: 0.18, 0.28, 0.81
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:35:40 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:41 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 06 10:35:41 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:41 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:41 functional-123579 kubelet[9301]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:41 functional-123579 kubelet[9301]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:41 functional-123579 kubelet[9301]: E1206 10:35:41.744264    9301 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:41 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:41 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:42 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 06 10:35:42 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:42 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:42 functional-123579 kubelet[9322]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:42 functional-123579 kubelet[9322]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:42 functional-123579 kubelet[9322]: E1206 10:35:42.493853    9322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:42 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:42 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:43 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 06 10:35:43 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:43 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:43 functional-123579 kubelet[9414]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:43 functional-123579 kubelet[9414]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:43 functional-123579 kubelet[9414]: E1206 10:35:43.236206    9414 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:43 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:43 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
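
The kubelet lines above carry the root cause of the whole failure chain: this kubelet refuses to validate its configuration on a cgroup v1 host, exits with status 1, and systemd restarts it (counter at 1154-1156 and climbing), so the static-pod apiserver on 8441 never comes up and every earlier "connection refused" follows from that. One reliable way to tell the two cgroup modes apart is that /sys/fs/cgroup/cgroup.controllers exists only under the v2 unified hierarchy; a small Go sketch using that fact:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup.controllers is only present on a cgroup v2 (unified) mount.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 - this kubelet build refuses to start here")
	} else {
		fmt.Println("could not determine cgroup mode:", err)
	}
}
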
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (361.779077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.67s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-123579 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-123579 get pods: exit status 1 (112.500003ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-123579 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
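Most of the inspect dump above is boilerplate for this failure; the fields that matter are the published ports and the container IP. A sketch to extract just those (the Networks template mirrors the one the harness itself runs later in the logs):

	# Published ports only: 8441/tcp should map to 127.0.0.1:33186
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-123579
	# Container IP on the functional-123579 network (expected 192.168.49.2)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-123579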
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (315.823444ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
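"status --format={{.Host}}" reports only the host state, which is why it prints Running while still exiting non-zero: other components (here, the apiserver) are down, and the helper notes this "may be ok". A sketch for the full component breakdown, assuming the standard minikube status flags:

	# JSON output includes Host, Kubelet, APIServer, and Kubeconfig states
	out/minikube-linux-arm64 status -p functional-123579 --output json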
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 logs -n 25: (1.167794238s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-137526 image ls --format short --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image   │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete  │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start   │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ start   │ -p functional-123579 --alsologtostderr -v=8                                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:latest                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add minikube-local-cache-test:functional-123579                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache delete minikube-local-cache-test:functional-123579                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl images                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ cache   │ functional-123579 cache reload                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ kubectl │ functional-123579 kubectl -- --context functional-123579 get pods                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:29:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:29:22.870980  522370 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:22.871170  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871181  522370 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:22.871187  522370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:22.871464  522370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:29:22.871865  522370 out.go:368] Setting JSON to false
	I1206 10:29:22.872761  522370 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11514,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:29:22.872829  522370 start.go:143] virtualization:  
	I1206 10:29:22.876360  522370 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:29:22.880135  522370 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:29:22.880243  522370 notify.go:221] Checking for updates...
	I1206 10:29:22.885979  522370 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:29:22.888900  522370 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:22.891673  522370 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:29:22.894419  522370 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:29:22.897199  522370 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:29:22.900505  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:22.900663  522370 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:29:22.930035  522370 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:29:22.930154  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:22.994169  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:22.985097483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:22.994270  522370 docker.go:319] overlay module found
	I1206 10:29:22.997336  522370 out.go:179] * Using the docker driver based on existing profile
	I1206 10:29:23.000134  522370 start.go:309] selected driver: docker
	I1206 10:29:23.000177  522370 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.000290  522370 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:29:23.000407  522370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:29:23.064912  522370 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:29:23.055716934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:29:23.065339  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:23.065406  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:23.065455  522370 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:23.068684  522370 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:29:23.071544  522370 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:29:23.074549  522370 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:29:23.077588  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:23.077638  522370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:29:23.077648  522370 cache.go:65] Caching tarball of preloaded images
	I1206 10:29:23.077715  522370 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:29:23.077742  522370 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:29:23.077753  522370 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:29:23.077861  522370 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:29:23.100973  522370 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:29:23.100996  522370 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:29:23.101011  522370 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:29:23.101047  522370 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:29:23.101106  522370 start.go:364] duration metric: took 36.569µs to acquireMachinesLock for "functional-123579"
	I1206 10:29:23.101131  522370 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:29:23.101140  522370 fix.go:54] fixHost starting: 
	I1206 10:29:23.101403  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:23.120661  522370 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:29:23.120697  522370 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:29:23.124123  522370 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:29:23.124169  522370 machine.go:94] provisionDockerMachine start ...
	I1206 10:29:23.124278  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.148209  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.148655  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.148670  522370 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:29:23.311217  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.311246  522370 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:29:23.311337  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.330615  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.330948  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.330967  522370 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:29:23.492326  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:29:23.492442  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.511425  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:23.511745  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:23.511767  522370 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:29:23.663802  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:29:23.663828  522370 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:29:23.663852  522370 ubuntu.go:190] setting up certificates
	I1206 10:29:23.663862  522370 provision.go:84] configureAuth start
	I1206 10:29:23.663938  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:23.683626  522370 provision.go:143] copyHostCerts
	I1206 10:29:23.683677  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683720  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:29:23.683732  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:29:23.683811  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:29:23.683905  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683927  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:29:23.683935  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:29:23.683965  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:29:23.684012  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684032  522370 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:29:23.684040  522370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:29:23.684065  522370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:29:23.684117  522370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:29:23.851072  522370 provision.go:177] copyRemoteCerts
	I1206 10:29:23.851167  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:29:23.851208  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:23.869258  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:23.976487  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:29:23.976551  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:29:23.994935  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:29:23.995001  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:29:24.028988  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:29:24.029065  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:29:24.047435  522370 provision.go:87] duration metric: took 383.548866ms to configureAuth
	I1206 10:29:24.047460  522370 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:29:24.047651  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:24.047753  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.065906  522370 main.go:143] libmachine: Using SSH client type: native
	I1206 10:29:24.066279  522370 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:29:24.066304  522370 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:29:24.394899  522370 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:29:24.394922  522370 machine.go:97] duration metric: took 1.270744832s to provisionDockerMachine
	I1206 10:29:24.394933  522370 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:29:24.394946  522370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:29:24.395040  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:29:24.395089  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.413037  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.518950  522370 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:29:24.522167  522370 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:29:24.522190  522370 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:29:24.522196  522370 command_runner.go:130] > VERSION_ID="12"
	I1206 10:29:24.522201  522370 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:29:24.522206  522370 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:29:24.522219  522370 command_runner.go:130] > ID=debian
	I1206 10:29:24.522224  522370 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:29:24.522228  522370 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:29:24.522234  522370 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:29:24.522273  522370 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:29:24.522296  522370 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:29:24.522307  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:29:24.522366  522370 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:29:24.522448  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:29:24.522465  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 10:29:24.522539  522370 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:29:24.522547  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> /etc/test/nested/copy/488068/hosts
	I1206 10:29:24.522590  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:29:24.529941  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:24.547406  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:29:24.564885  522370 start.go:296] duration metric: took 169.937214ms for postStartSetup
	I1206 10:29:24.565009  522370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:29:24.565071  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.582051  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.684564  522370 command_runner.go:130] > 18%
	I1206 10:29:24.685308  522370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:29:24.690194  522370 command_runner.go:130] > 161G
	I1206 10:29:24.690863  522370 fix.go:56] duration metric: took 1.589719046s for fixHost
	I1206 10:29:24.690882  522370 start.go:83] releasing machines lock for "functional-123579", held for 1.589762361s
	I1206 10:29:24.690959  522370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:29:24.710139  522370 ssh_runner.go:195] Run: cat /version.json
	I1206 10:29:24.710198  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.710437  522370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:29:24.710491  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:24.744752  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.750995  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:24.850618  522370 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:29:24.850833  522370 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:24.941044  522370 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:29:24.943691  522370 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:29:24.943731  522370 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:29:24.943796  522370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:29:24.982406  522370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:29:24.986710  522370 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:29:24.986856  522370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:29:24.986921  522370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:29:24.995206  522370 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:29:24.995230  522370 start.go:496] detecting cgroup driver to use...
	I1206 10:29:24.995260  522370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:29:24.995314  522370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:29:25.015488  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:29:25.029388  522370 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:29:25.029474  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:29:25.044588  522370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:29:25.057886  522370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:29:25.175907  522370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:29:25.297406  522370 docker.go:234] disabling docker service ...
	I1206 10:29:25.297502  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:29:25.313940  522370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:29:25.326948  522370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:29:25.448237  522370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:29:25.592886  522370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:29:25.605716  522370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:29:25.618765  522370 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:29:25.620045  522370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:29:25.620120  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.628683  522370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:29:25.628808  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.637855  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.646676  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.656251  522370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:29:25.664395  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.673385  522370 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.681859  522370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:25.691317  522370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:29:25.697883  522370 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:29:25.698954  522370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:29:25.706470  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:25.835287  522370 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:29:25.994073  522370 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:29:25.994183  522370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:29:25.998083  522370 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:29:25.998204  522370 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:29:25.998238  522370 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:29:25.998335  522370 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:25.998358  522370 command_runner.go:130] > Access: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998390  522370 command_runner.go:130] > Modify: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998420  522370 command_runner.go:130] > Change: 2025-12-06 10:29:25.948140155 +0000
	I1206 10:29:25.998437  522370 command_runner.go:130] >  Birth: -
	I1206 10:29:25.998473  522370 start.go:564] Will wait 60s for crictl version
	I1206 10:29:25.998553  522370 ssh_runner.go:195] Run: which crictl
	I1206 10:29:26.004847  522370 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:29:26.004981  522370 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:29:26.037391  522370 command_runner.go:130] > Version:  0.1.0
	I1206 10:29:26.037414  522370 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:29:26.037421  522370 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:29:26.037427  522370 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:29:26.037438  522370 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:29:26.037548  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.065733  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.065769  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.065793  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.065805  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.065811  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.065822  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.065827  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.065832  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.065840  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.065845  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.065852  522370 command_runner.go:130] >      static
	I1206 10:29:26.065886  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.065897  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.065918  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.065928  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.065932  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.065941  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.065946  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.065954  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.065958  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.068082  522370 ssh_runner.go:195] Run: crio --version
	I1206 10:29:26.095375  522370 command_runner.go:130] > crio version 1.34.3
	I1206 10:29:26.095453  522370 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:29:26.095474  522370 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:29:26.095491  522370 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:29:26.095522  522370 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:29:26.095561  522370 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:29:26.095582  522370 command_runner.go:130] >    Compiler:       gc
	I1206 10:29:26.095622  522370 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:29:26.095651  522370 command_runner.go:130] >    Linkmode:       static
	I1206 10:29:26.095669  522370 command_runner.go:130] >    BuildTags:
	I1206 10:29:26.095698  522370 command_runner.go:130] >      static
	I1206 10:29:26.095717  522370 command_runner.go:130] >      netgo
	I1206 10:29:26.095735  522370 command_runner.go:130] >      osusergo
	I1206 10:29:26.095756  522370 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:29:26.095787  522370 command_runner.go:130] >      seccomp
	I1206 10:29:26.095810  522370 command_runner.go:130] >      apparmor
	I1206 10:29:26.095867  522370 command_runner.go:130] >      selinux
	I1206 10:29:26.095888  522370 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:29:26.095910  522370 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:29:26.095930  522370 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:29:26.103062  522370 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:29:26.105990  522370 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
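The Go template minikube passes to docker network inspect flattens the name, driver, subnet, gateway, MTU and container IPs into one JSON blob. A simpler template pulls just the addressing (a sketch; the subnet shown is the one implied by the 192.168.49.x addresses elsewhere in this log):

$ docker network inspect functional-123579 \
    --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
192.168.49.0/24 via 192.168.49.1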
	I1206 10:29:26.122102  522370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:29:26.125939  522370 command_runner.go:130] > 192.168.49.1	host.minikube.internal
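That grep is minikube's idempotency check for the host alias: it only appends to /etc/hosts when the entry is missing. Run by hand on the node it looks like this (the field separator is a tab, as in the matched line above):

$ grep 'host.minikube.internal$' /etc/hosts
192.168.49.1	host.minikube.internal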
	I1206 10:29:26.126304  522370 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:29:26.126416  522370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:29:26.126475  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.161627  522370 command_runner.go:130] > {
	I1206 10:29:26.161646  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.161650  522370 command_runner.go:130] >     {
	I1206 10:29:26.161662  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.161666  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161672  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.161676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161681  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161689  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.161697  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.161702  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161707  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.161711  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161719  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161729  522370 command_runner.go:130] >     },
	I1206 10:29:26.161732  522370 command_runner.go:130] >     {
	I1206 10:29:26.161739  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.161743  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161748  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.161751  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161757  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161765  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.161774  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.161777  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161781  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.161785  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161792  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161795  522370 command_runner.go:130] >     },
	I1206 10:29:26.161799  522370 command_runner.go:130] >     {
	I1206 10:29:26.161805  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.161810  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161815  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.161818  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161822  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161830  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.161838  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.161843  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161847  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.161851  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.161856  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161859  522370 command_runner.go:130] >     },
	I1206 10:29:26.161863  522370 command_runner.go:130] >     {
	I1206 10:29:26.161869  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.161873  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161878  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.161883  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161887  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161898  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.161905  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.161908  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161912  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.161916  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.161920  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.161923  522370 command_runner.go:130] >       },
	I1206 10:29:26.161935  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.161939  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.161942  522370 command_runner.go:130] >     },
	I1206 10:29:26.161946  522370 command_runner.go:130] >     {
	I1206 10:29:26.161953  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.161956  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.161963  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.161966  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161970  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.161978  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.161986  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.161990  522370 command_runner.go:130] >       ],
	I1206 10:29:26.161994  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.161997  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162001  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162004  522370 command_runner.go:130] >       },
	I1206 10:29:26.162008  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162011  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162014  522370 command_runner.go:130] >     },
	I1206 10:29:26.162018  522370 command_runner.go:130] >     {
	I1206 10:29:26.162024  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.162028  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162033  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.162037  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162041  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162050  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.162067  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.162071  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162075  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.162081  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162091  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162094  522370 command_runner.go:130] >       },
	I1206 10:29:26.162098  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162102  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162105  522370 command_runner.go:130] >     },
	I1206 10:29:26.162115  522370 command_runner.go:130] >     {
	I1206 10:29:26.162123  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.162128  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162134  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.162137  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162143  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162154  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.162163  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.162166  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162170  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.162173  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162178  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162181  522370 command_runner.go:130] >     },
	I1206 10:29:26.162184  522370 command_runner.go:130] >     {
	I1206 10:29:26.162191  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.162194  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162200  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.162203  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162207  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162215  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.162232  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.162235  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162239  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.162243  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162250  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.162253  522370 command_runner.go:130] >       },
	I1206 10:29:26.162257  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162260  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.162263  522370 command_runner.go:130] >     },
	I1206 10:29:26.162267  522370 command_runner.go:130] >     {
	I1206 10:29:26.162273  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.162277  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.162281  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.162284  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162288  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.162296  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.162304  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.162307  522370 command_runner.go:130] >       ],
	I1206 10:29:26.162311  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.162315  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.162318  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.162321  522370 command_runner.go:130] >       },
	I1206 10:29:26.162325  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.162329  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.162333  522370 command_runner.go:130] >     }
	I1206 10:29:26.162336  522370 command_runner.go:130] >   ]
	I1206 10:29:26.162339  522370 command_runner.go:130] > }
	I1206 10:29:26.164653  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.164677  522370 crio.go:433] Images already preloaded, skipping extraction
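The JSON above is easier to audit when reduced to tags; assuming jq is available on the node, the nine preloaded images can be listed as:

$ sudo crictl images --output json | jq -r '.images[].repoTags[]'
docker.io/kindest/kindnetd:v20250512-df8de77b
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/pause:3.10.1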
	I1206 10:29:26.164733  522370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:29:26.190066  522370 command_runner.go:130] > {
	I1206 10:29:26.190096  522370 command_runner.go:130] >   "images":  [
	I1206 10:29:26.190102  522370 command_runner.go:130] >     {
	I1206 10:29:26.190111  522370 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:29:26.190116  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190122  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:29:26.190126  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190130  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190139  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:29:26.190147  522370 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:29:26.190155  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190160  522370 command_runner.go:130] >       "size":  "111333938",
	I1206 10:29:26.190164  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190168  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190171  522370 command_runner.go:130] >     },
	I1206 10:29:26.190174  522370 command_runner.go:130] >     {
	I1206 10:29:26.190181  522370 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:29:26.190184  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190189  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:29:26.190193  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190197  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190205  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:29:26.190213  522370 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:29:26.190216  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190220  522370 command_runner.go:130] >       "size":  "29037500",
	I1206 10:29:26.190224  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190229  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190232  522370 command_runner.go:130] >     },
	I1206 10:29:26.190235  522370 command_runner.go:130] >     {
	I1206 10:29:26.190241  522370 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:29:26.190245  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190250  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:29:26.190254  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190257  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190265  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:29:26.190273  522370 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:29:26.190277  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190281  522370 command_runner.go:130] >       "size":  "74491780",
	I1206 10:29:26.190285  522370 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:29:26.190289  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190292  522370 command_runner.go:130] >     },
	I1206 10:29:26.190295  522370 command_runner.go:130] >     {
	I1206 10:29:26.190301  522370 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:29:26.190308  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190313  522370 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:29:26.190317  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190322  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190329  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:29:26.190336  522370 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:29:26.190339  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190343  522370 command_runner.go:130] >       "size":  "60857170",
	I1206 10:29:26.190346  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190350  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190353  522370 command_runner.go:130] >       },
	I1206 10:29:26.190364  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190369  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190372  522370 command_runner.go:130] >     },
	I1206 10:29:26.190374  522370 command_runner.go:130] >     {
	I1206 10:29:26.190381  522370 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:29:26.190384  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190389  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:29:26.190392  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190396  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190403  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:29:26.190412  522370 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:29:26.190415  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190419  522370 command_runner.go:130] >       "size":  "84949999",
	I1206 10:29:26.190422  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190425  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190428  522370 command_runner.go:130] >       },
	I1206 10:29:26.190432  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190436  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190439  522370 command_runner.go:130] >     },
	I1206 10:29:26.190441  522370 command_runner.go:130] >     {
	I1206 10:29:26.190448  522370 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:29:26.190452  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190460  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:29:26.190464  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190467  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190476  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:29:26.190484  522370 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:29:26.190486  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190490  522370 command_runner.go:130] >       "size":  "72170325",
	I1206 10:29:26.190493  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190497  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190500  522370 command_runner.go:130] >       },
	I1206 10:29:26.190504  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190507  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190514  522370 command_runner.go:130] >     },
	I1206 10:29:26.190517  522370 command_runner.go:130] >     {
	I1206 10:29:26.190524  522370 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:29:26.190528  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190533  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:29:26.190536  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190540  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190547  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:29:26.190554  522370 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:29:26.190557  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190561  522370 command_runner.go:130] >       "size":  "74106775",
	I1206 10:29:26.190565  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190569  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190572  522370 command_runner.go:130] >     },
	I1206 10:29:26.190574  522370 command_runner.go:130] >     {
	I1206 10:29:26.190581  522370 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:29:26.190584  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190590  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:29:26.190593  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190597  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190604  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:29:26.190628  522370 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:29:26.190632  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190636  522370 command_runner.go:130] >       "size":  "49822549",
	I1206 10:29:26.190639  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190643  522370 command_runner.go:130] >         "value":  "0"
	I1206 10:29:26.190646  522370 command_runner.go:130] >       },
	I1206 10:29:26.190650  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190653  522370 command_runner.go:130] >       "pinned":  false
	I1206 10:29:26.190656  522370 command_runner.go:130] >     },
	I1206 10:29:26.190659  522370 command_runner.go:130] >     {
	I1206 10:29:26.190665  522370 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:29:26.190669  522370 command_runner.go:130] >       "repoTags":  [
	I1206 10:29:26.190673  522370 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.190676  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190680  522370 command_runner.go:130] >       "repoDigests":  [
	I1206 10:29:26.190687  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:29:26.190694  522370 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:29:26.190697  522370 command_runner.go:130] >       ],
	I1206 10:29:26.190701  522370 command_runner.go:130] >       "size":  "519884",
	I1206 10:29:26.190705  522370 command_runner.go:130] >       "uid":  {
	I1206 10:29:26.190709  522370 command_runner.go:130] >         "value":  "65535"
	I1206 10:29:26.190712  522370 command_runner.go:130] >       },
	I1206 10:29:26.190716  522370 command_runner.go:130] >       "username":  "",
	I1206 10:29:26.190719  522370 command_runner.go:130] >       "pinned":  true
	I1206 10:29:26.190722  522370 command_runner.go:130] >     }
	I1206 10:29:26.190724  522370 command_runner.go:130] >   ]
	I1206 10:29:26.190728  522370 command_runner.go:130] > }
	I1206 10:29:26.192099  522370 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:29:26.192121  522370 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:29:26.192130  522370 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:29:26.192245  522370 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
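The [Unit]/[Service] fragment above is rendered into a systemd drop-in that clears and replaces the kubelet's ExecStart. On the node, the merged unit and the drop-in paths can be inspected without knowing minikube's exact file layout (a sketch using standard systemctl verbs; the drop-in location is a minikube convention and may vary by version):

$ sudo systemctl cat kubelet            # base unit plus every drop-in, with file paths
$ systemctl show kubelet -p DropInPaths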
	I1206 10:29:26.192338  522370 ssh_runner.go:195] Run: crio config
	I1206 10:29:26.220366  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.219989922Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:29:26.220411  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220176363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:29:26.220654  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22050187Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:29:26.220871  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.220715248Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:29:26.221165  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.22098899Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:29:26.221621  522370 command_runner.go:130] ! time="2025-12-06T10:29:26.221432459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:29:26.238478  522370 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
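Those info messages document CRI-O's merge order: the base file is skipped here because /etc/crio/crio.conf does not exist, then each drop-in under /etc/crio/crio.conf.d is applied in lexical order, so 10-crio.conf overrides 02-crio.conf. The drop-ins named in the log can be listed directly:

$ ls /etc/crio/crio.conf.d/
02-crio.conf  10-crio.conf

The dump that follows is the fully merged configuration, with unchanged defaults shown as comments.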
	I1206 10:29:26.263608  522370 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:29:26.263638  522370 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:29:26.263647  522370 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:29:26.263651  522370 command_runner.go:130] > #
	I1206 10:29:26.263687  522370 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:29:26.263707  522370 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:29:26.263714  522370 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:29:26.263721  522370 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:29:26.263726  522370 command_runner.go:130] > # reload'.
	I1206 10:29:26.263732  522370 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:29:26.263756  522370 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:29:26.263778  522370 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:29:26.263789  522370 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:29:26.263793  522370 command_runner.go:130] > [crio]
	I1206 10:29:26.263802  522370 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:29:26.263811  522370 command_runner.go:130] > # containers images, in this directory.
	I1206 10:29:26.263826  522370 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:29:26.263848  522370 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:29:26.263868  522370 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:29:26.263877  522370 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:29:26.263885  522370 command_runner.go:130] > # imagestore = ""
	I1206 10:29:26.263894  522370 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:29:26.263901  522370 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:29:26.263908  522370 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:29:26.263914  522370 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:29:26.263920  522370 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:29:26.263936  522370 command_runner.go:130] > # storage_option = [
	I1206 10:29:26.263952  522370 command_runner.go:130] > # ]
	I1206 10:29:26.263965  522370 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:29:26.263972  522370 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:29:26.263985  522370 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:29:26.263995  522370 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:29:26.264002  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:29:26.264006  522370 command_runner.go:130] > # always happen on a node reboot
	I1206 10:29:26.264013  522370 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:29:26.264036  522370 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:29:26.264050  522370 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:29:26.264055  522370 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:29:26.264060  522370 command_runner.go:130] > # version_file_persist = ""
	I1206 10:29:26.264078  522370 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:29:26.264092  522370 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:29:26.264096  522370 command_runner.go:130] > # internal_wipe = true
	I1206 10:29:26.264105  522370 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:29:26.264113  522370 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:29:26.264117  522370 command_runner.go:130] > # internal_repair = true
	I1206 10:29:26.264124  522370 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:29:26.264131  522370 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:29:26.264150  522370 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:29:26.264171  522370 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:29:26.264181  522370 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:29:26.264188  522370 command_runner.go:130] > [crio.api]
	I1206 10:29:26.264194  522370 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:29:26.264202  522370 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:29:26.264208  522370 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:29:26.264214  522370 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:29:26.264221  522370 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:29:26.264226  522370 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:29:26.264241  522370 command_runner.go:130] > # stream_port = "0"
	I1206 10:29:26.264256  522370 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:29:26.264261  522370 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:29:26.264279  522370 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:29:26.264295  522370 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:29:26.264302  522370 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:29:26.264317  522370 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264326  522370 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:29:26.264332  522370 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:29:26.264338  522370 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:29:26.264355  522370 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:29:26.264373  522370 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:29:26.264389  522370 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:29:26.264395  522370 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:29:26.264399  522370 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:29:26.264435  522370 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264448  522370 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:29:26.264456  522370 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:29:26.264460  522370 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:29:26.264467  522370 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:29:26.264476  522370 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:29:26.264479  522370 command_runner.go:130] > [crio.runtime]
	I1206 10:29:26.264489  522370 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:29:26.264495  522370 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:29:26.264506  522370 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:29:26.264513  522370 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:29:26.264524  522370 command_runner.go:130] > # default_ulimits = [
	I1206 10:29:26.264527  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264534  522370 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:29:26.264543  522370 command_runner.go:130] > # no_pivot = false
	I1206 10:29:26.264549  522370 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:29:26.264555  522370 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:29:26.264561  522370 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:29:26.264569  522370 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:29:26.264576  522370 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:29:26.264584  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264591  522370 command_runner.go:130] > # conmon = ""
	I1206 10:29:26.264595  522370 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:29:26.264602  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:29:26.264612  522370 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:29:26.264623  522370 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:29:26.264629  522370 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:29:26.264643  522370 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:29:26.264647  522370 command_runner.go:130] > # conmon_env = [
	I1206 10:29:26.264650  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264655  522370 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:29:26.264660  522370 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:29:26.264668  522370 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:29:26.264674  522370 command_runner.go:130] > # default_env = [
	I1206 10:29:26.264677  522370 command_runner.go:130] > # ]
	I1206 10:29:26.264683  522370 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:29:26.264699  522370 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:29:26.264703  522370 command_runner.go:130] > # selinux = false
	I1206 10:29:26.264710  522370 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:29:26.264720  522370 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:29:26.264729  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264734  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.264740  522370 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:29:26.264745  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264751  522370 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:29:26.264759  522370 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:29:26.264767  522370 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:29:26.264774  522370 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:29:26.264789  522370 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:29:26.264794  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264799  522370 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:29:26.264807  522370 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:29:26.264817  522370 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:29:26.264821  522370 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:29:26.264828  522370 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:29:26.264834  522370 command_runner.go:130] > # blockio parameters.
	I1206 10:29:26.264838  522370 command_runner.go:130] > # blockio_reload = false
	I1206 10:29:26.264849  522370 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:29:26.264856  522370 command_runner.go:130] > # irqbalance daemon.
	I1206 10:29:26.264862  522370 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:29:26.264868  522370 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:29:26.264877  522370 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:29:26.264889  522370 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:29:26.264897  522370 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:29:26.264904  522370 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:29:26.264910  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.264917  522370 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:29:26.264922  522370 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:29:26.264926  522370 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:29:26.264932  522370 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:29:26.264936  522370 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:29:26.264946  522370 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:29:26.264954  522370 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:29:26.264958  522370 command_runner.go:130] > # will be added.
	I1206 10:29:26.264966  522370 command_runner.go:130] > # default_capabilities = [
	I1206 10:29:26.264970  522370 command_runner.go:130] > # 	"CHOWN",
	I1206 10:29:26.264974  522370 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:29:26.264986  522370 command_runner.go:130] > # 	"FSETID",
	I1206 10:29:26.264990  522370 command_runner.go:130] > # 	"FOWNER",
	I1206 10:29:26.264993  522370 command_runner.go:130] > # 	"SETGID",
	I1206 10:29:26.264996  522370 command_runner.go:130] > # 	"SETUID",
	I1206 10:29:26.265019  522370 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:29:26.265029  522370 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:29:26.265035  522370 command_runner.go:130] > # 	"KILL",
	I1206 10:29:26.265038  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265046  522370 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:29:26.265056  522370 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:29:26.265061  522370 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:29:26.265069  522370 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:29:26.265075  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265088  522370 command_runner.go:130] > default_sysctls = [
	I1206 10:29:26.265093  522370 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:29:26.265096  522370 command_runner.go:130] > ]
	I1206 10:29:26.265101  522370 command_runner.go:130] > # List of devices on the host that a
	I1206 10:29:26.265110  522370 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:29:26.265114  522370 command_runner.go:130] > # allowed_devices = [
	I1206 10:29:26.265118  522370 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:29:26.265123  522370 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:29:26.265127  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265134  522370 command_runner.go:130] > # List of additional devices. specified as
	I1206 10:29:26.265142  522370 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:29:26.265150  522370 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:29:26.265156  522370 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:29:26.265160  522370 command_runner.go:130] > # additional_devices = [
	I1206 10:29:26.265164  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265169  522370 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:29:26.265179  522370 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:29:26.265184  522370 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:29:26.265188  522370 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:29:26.265194  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265200  522370 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:29:26.265206  522370 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:29:26.265213  522370 command_runner.go:130] > # Defaults to false.
	I1206 10:29:26.265218  522370 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:29:26.265225  522370 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:29:26.265233  522370 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:29:26.265237  522370 command_runner.go:130] > # hooks_dir = [
	I1206 10:29:26.265245  522370 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:29:26.265248  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265264  522370 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:29:26.265271  522370 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:29:26.265277  522370 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:29:26.265282  522370 command_runner.go:130] > #
	I1206 10:29:26.265293  522370 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:29:26.265302  522370 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:29:26.265309  522370 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:29:26.265312  522370 command_runner.go:130] > #
	I1206 10:29:26.265319  522370 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:29:26.265333  522370 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:29:26.265340  522370 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:29:26.265345  522370 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:29:26.265351  522370 command_runner.go:130] > #
	I1206 10:29:26.265355  522370 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:29:26.265360  522370 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:29:26.265367  522370 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:29:26.265371  522370 command_runner.go:130] > # pids_limit = -1
	I1206 10:29:26.265378  522370 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1206 10:29:26.265386  522370 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:29:26.265392  522370 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:29:26.265403  522370 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:29:26.265407  522370 command_runner.go:130] > # log_size_max = -1
	I1206 10:29:26.265416  522370 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:29:26.265423  522370 command_runner.go:130] > # log_to_journald = false
	I1206 10:29:26.265431  522370 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:29:26.265437  522370 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:29:26.265448  522370 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:29:26.265453  522370 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:29:26.265458  522370 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:29:26.265464  522370 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:29:26.265470  522370 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:29:26.265476  522370 command_runner.go:130] > # read_only = false
	I1206 10:29:26.265482  522370 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:29:26.265491  522370 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:29:26.265495  522370 command_runner.go:130] > # live configuration reload.
	I1206 10:29:26.265508  522370 command_runner.go:130] > # log_level = "info"
	I1206 10:29:26.265514  522370 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:29:26.265523  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.265529  522370 command_runner.go:130] > # log_filter = ""
	I1206 10:29:26.265536  522370 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265542  522370 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:29:26.265548  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265557  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265564  522370 command_runner.go:130] > # uid_mappings = ""
	I1206 10:29:26.265570  522370 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:29:26.265578  522370 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:29:26.265586  522370 command_runner.go:130] > # separated by comma.
	I1206 10:29:26.265597  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265602  522370 command_runner.go:130] > # gid_mappings = ""
	I1206 10:29:26.265611  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:29:26.265620  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265626  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265635  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265642  522370 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:29:26.265648  522370 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:29:26.265656  522370 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:29:26.265663  522370 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:29:26.265680  522370 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:29:26.265684  522370 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:29:26.265691  522370 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:29:26.265701  522370 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:29:26.265707  522370 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1206 10:29:26.265713  522370 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:29:26.265719  522370 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:29:26.265727  522370 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:29:26.265733  522370 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:29:26.265740  522370 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:29:26.265747  522370 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:29:26.265754  522370 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:29:26.265768  522370 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:29:26.265780  522370 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:29:26.265787  522370 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:29:26.265794  522370 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:29:26.265801  522370 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:29:26.265809  522370 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:29:26.265814  522370 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:29:26.265818  522370 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:29:26.265824  522370 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:29:26.265832  522370 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:29:26.265838  522370 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:29:26.265846  522370 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:29:26.265857  522370 command_runner.go:130] > # pinns_path = ""
	I1206 10:29:26.265863  522370 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:29:26.265869  522370 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:29:26.265874  522370 command_runner.go:130] > # enable_criu_support = true
	I1206 10:29:26.265881  522370 command_runner.go:130] > # Enable/disable the generation of the container and
	I1206 10:29:26.265887  522370 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:29:26.265894  522370 command_runner.go:130] > # enable_pod_events = false
	I1206 10:29:26.265901  522370 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:29:26.265906  522370 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:29:26.265910  522370 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:29:26.265915  522370 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:29:26.265925  522370 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1206 10:29:26.265945  522370 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:29:26.265951  522370 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:29:26.265960  522370 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:29:26.265970  522370 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:29:26.265974  522370 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:29:26.265977  522370 command_runner.go:130] > # ]
	I1206 10:29:26.265984  522370 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:29:26.265993  522370 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:29:26.265999  522370 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:29:26.266004  522370 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:29:26.266011  522370 command_runner.go:130] > #
	I1206 10:29:26.266019  522370 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:29:26.266024  522370 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:29:26.266030  522370 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:29:26.266035  522370 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:29:26.266042  522370 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:29:26.266047  522370 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:29:26.266059  522370 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:29:26.266065  522370 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:29:26.266068  522370 command_runner.go:130] > # monitor_env = []
	I1206 10:29:26.266080  522370 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:29:26.266084  522370 command_runner.go:130] > # allowed_annotations = []
	I1206 10:29:26.266090  522370 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:29:26.266094  522370 command_runner.go:130] > # no_sync_log = false
	I1206 10:29:26.266098  522370 command_runner.go:130] > # default_annotations = {}
	I1206 10:29:26.266105  522370 command_runner.go:130] > # stream_websockets = false
	I1206 10:29:26.266112  522370 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:29:26.266145  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.266155  522370 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:29:26.266162  522370 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:29:26.266168  522370 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:29:26.266182  522370 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:29:26.266186  522370 command_runner.go:130] > #   in $PATH.
	I1206 10:29:26.266192  522370 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:29:26.266199  522370 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:29:26.266206  522370 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:29:26.266212  522370 command_runner.go:130] > #   state.
	I1206 10:29:26.266218  522370 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:29:26.266224  522370 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:29:26.266232  522370 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:29:26.266239  522370 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:29:26.266247  522370 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:29:26.266254  522370 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:29:26.266265  522370 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:29:26.266275  522370 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:29:26.266283  522370 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:29:26.266287  522370 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:29:26.266294  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:29:26.266304  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:29:26.266315  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:29:26.266324  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:29:26.266332  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:29:26.266339  522370 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:29:26.266348  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:29:26.266356  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:29:26.266368  522370 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:29:26.266375  522370 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:29:26.266382  522370 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:29:26.266388  522370 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:29:26.266394  522370 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:29:26.266410  522370 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:29:26.266417  522370 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:29:26.266425  522370 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:29:26.266435  522370 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:29:26.266440  522370 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:29:26.266447  522370 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:29:26.266455  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:29:26.266463  522370 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:29:26.266467  522370 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:29:26.266475  522370 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:29:26.266479  522370 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:29:26.266489  522370 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1206 10:29:26.266501  522370 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:29:26.266510  522370 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:29:26.266520  522370 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:29:26.266531  522370 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:29:26.266542  522370 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:29:26.266552  522370 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:29:26.266559  522370 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:29:26.266571  522370 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:29:26.266585  522370 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:29:26.266593  522370 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:29:26.266603  522370 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:29:26.266610  522370 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:29:26.266617  522370 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:29:26.266625  522370 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:29:26.266636  522370 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:29:26.266641  522370 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:29:26.266647  522370 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:29:26.266656  522370 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:29:26.266660  522370 command_runner.go:130] > #
	I1206 10:29:26.266665  522370 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:29:26.266675  522370 command_runner.go:130] > #
	I1206 10:29:26.266682  522370 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:29:26.266689  522370 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1206 10:29:26.266694  522370 command_runner.go:130] > #
	I1206 10:29:26.266701  522370 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:29:26.266708  522370 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:29:26.266711  522370 command_runner.go:130] > #
	I1206 10:29:26.266718  522370 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:29:26.266723  522370 command_runner.go:130] > # feature.
	I1206 10:29:26.266726  522370 command_runner.go:130] > #
	I1206 10:29:26.266732  522370 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1206 10:29:26.266739  522370 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:29:26.266747  522370 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:29:26.266754  522370 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:29:26.266763  522370 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:29:26.266768  522370 command_runner.go:130] > #
	I1206 10:29:26.266774  522370 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:29:26.266786  522370 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:29:26.266792  522370 command_runner.go:130] > #
	I1206 10:29:26.266800  522370 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1206 10:29:26.266806  522370 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:29:26.266809  522370 command_runner.go:130] > #
	I1206 10:29:26.266815  522370 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:29:26.266825  522370 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:29:26.266831  522370 command_runner.go:130] > # limitation.
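A minimal sketch of a pod that opts into the notifier described above; it assumes a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", and the pod name and image are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notifier-demo            # hypothetical name
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never                   # required, see the note above
      containers:
      - name: demo
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          seccompProfile:
            type: RuntimeDefault             # a seccomp profile must be in effect
    EOF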
	I1206 10:29:26.266835  522370 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:29:26.266848  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:29:26.266853  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266856  522370 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:29:26.266862  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266868  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266873  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266880  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266884  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266889  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266892  522370 command_runner.go:130] > allowed_annotations = [
	I1206 10:29:26.266897  522370 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:29:26.266900  522370 command_runner.go:130] > ]
	I1206 10:29:26.266904  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266911  522370 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:29:26.266916  522370 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:29:26.266921  522370 command_runner.go:130] > runtime_type = ""
	I1206 10:29:26.266932  522370 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:29:26.266939  522370 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:29:26.266943  522370 command_runner.go:130] > runtime_config_path = ""
	I1206 10:29:26.266947  522370 command_runner.go:130] > container_min_memory = ""
	I1206 10:29:26.266952  522370 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:29:26.266961  522370 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:29:26.266966  522370 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:29:26.266970  522370 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:29:26.266981  522370 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:29:26.266987  522370 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:29:26.266995  522370 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:29:26.267006  522370 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 10:29:26.267024  522370 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:29:26.267035  522370 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:29:26.267047  522370 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:29:26.267054  522370 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:29:26.267063  522370 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:29:26.267072  522370 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:29:26.267080  522370 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1206 10:29:26.267087  522370 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:29:26.267094  522370 command_runner.go:130] > # Example:
	I1206 10:29:26.267098  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:29:26.267103  522370 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:29:26.267108  522370 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:29:26.267132  522370 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:29:26.267141  522370 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:29:26.267145  522370 command_runner.go:130] > # cpushares = "5"
	I1206 10:29:26.267149  522370 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:29:26.267152  522370 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:29:26.267156  522370 command_runner.go:130] > # cpulimit = "35"
	I1206 10:29:26.267159  522370 command_runner.go:130] > # Where:
	I1206 10:29:26.267165  522370 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:29:26.267172  522370 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:29:26.267181  522370 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:29:26.267188  522370 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:29:26.267199  522370 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:29:26.267205  522370 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
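Tying the two annotations together, a pod opting into the example workload above and overriding cpushares for one container might look like the following sketch (the feature is experimental, and all names and values here are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                            # hypothetical name
      annotations:
        io.crio/workload: ""                         # activation annotation; the value is ignored
        io.crio.workload-type/demo: '{"cpushares": "200"}'
    spec:
      containers:
      - name: demo
        image: busybox
        command: ["sleep", "3600"]
    EOF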
	I1206 10:29:26.267210  522370 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:29:26.267224  522370 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:29:26.267229  522370 command_runner.go:130] > # Default value is set to true
	I1206 10:29:26.267234  522370 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:29:26.267244  522370 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:29:26.267251  522370 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:29:26.267255  522370 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:29:26.267260  522370 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:29:26.267265  522370 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:29:26.267277  522370 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:29:26.267283  522370 command_runner.go:130] > # timezone = ""
	I1206 10:29:26.267290  522370 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:29:26.267293  522370 command_runner.go:130] > #
	I1206 10:29:26.267299  522370 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:29:26.267310  522370 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:29:26.267313  522370 command_runner.go:130] > [crio.image]
	I1206 10:29:26.267319  522370 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:29:26.267324  522370 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:29:26.267332  522370 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:29:26.267339  522370 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267343  522370 command_runner.go:130] > # global_auth_file = ""
	I1206 10:29:26.267351  522370 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:29:26.267359  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267364  522370 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:29:26.267378  522370 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:29:26.267385  522370 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:29:26.267396  522370 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:29:26.267401  522370 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:29:26.267407  522370 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:29:26.267413  522370 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 10:29:26.267421  522370 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 10:29:26.267427  522370 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:29:26.267434  522370 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:29:26.267440  522370 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:29:26.267447  522370 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:29:26.267455  522370 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:29:26.267461  522370 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:29:26.267471  522370 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:29:26.267480  522370 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:29:26.267484  522370 command_runner.go:130] > # pinned_images = [
	I1206 10:29:26.267488  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267494  522370 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:29:26.267502  522370 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:29:26.267509  522370 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:29:26.267517  522370 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:29:26.267525  522370 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:29:26.267530  522370 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:29:26.267538  522370 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:29:26.267548  522370 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:29:26.267556  522370 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:29:26.267566  522370 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or the
	I1206 10:29:26.267572  522370 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1206 10:29:26.267579  522370 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:29:26.267587  522370 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:29:26.267594  522370 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:29:26.267597  522370 command_runner.go:130] > # changing them here.
	I1206 10:29:26.267603  522370 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:29:26.267608  522370 command_runner.go:130] > # insecure_registries = [
	I1206 10:29:26.267613  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267620  522370 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:29:26.267637  522370 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:29:26.267641  522370 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:29:26.267646  522370 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:29:26.267671  522370 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:29:26.267678  522370 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:29:26.267687  522370 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:29:26.267699  522370 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:29:26.267706  522370 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:29:26.267714  522370 command_runner.go:130] > # gets canceled. This value is also used to derive the pull progress interval, pull_progress_timeout / 10.
	I1206 10:29:26.267723  522370 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:29:26.267732  522370 command_runner.go:130] > # pull_progress_timeout = "0s"
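To make the interval arithmetic concrete: with pull_progress_timeout = "50s", progress would be reported every 50s / 10 = 5s. As a hypothetical drop-in:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/20-pull-timeout.conf
    [crio.image]
    pull_progress_timeout = "50s"   # progress interval = 50s / 10 = 5s
    EOF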
	I1206 10:29:26.267739  522370 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:29:26.267746  522370 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:29:26.267753  522370 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:29:26.267758  522370 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:29:26.267766  522370 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:29:26.267775  522370 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1206 10:29:26.267781  522370 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:29:26.267788  522370 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:29:26.267795  522370 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:29:26.267798  522370 command_runner.go:130] > # CNI plugins.
	I1206 10:29:26.267802  522370 command_runner.go:130] > [crio.network]
	I1206 10:29:26.267808  522370 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:29:26.267816  522370 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:29:26.267820  522370 command_runner.go:130] > # cni_default_network = ""
	I1206 10:29:26.267826  522370 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:29:26.267836  522370 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:29:26.267842  522370 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:29:26.267845  522370 command_runner.go:130] > # plugin_dirs = [
	I1206 10:29:26.267853  522370 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:29:26.267856  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267861  522370 command_runner.go:130] > # List of included pod metrics.
	I1206 10:29:26.267867  522370 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:29:26.267870  522370 command_runner.go:130] > # ]
	I1206 10:29:26.267879  522370 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:29:26.267885  522370 command_runner.go:130] > [crio.metrics]
	I1206 10:29:26.267890  522370 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:29:26.267897  522370 command_runner.go:130] > # enable_metrics = false
	I1206 10:29:26.267902  522370 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:29:26.267906  522370 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:29:26.267912  522370 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:29:26.267919  522370 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:29:26.267925  522370 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:29:26.267938  522370 command_runner.go:130] > # metrics_collectors = [
	I1206 10:29:26.267943  522370 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:29:26.267947  522370 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:29:26.267951  522370 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:29:26.267954  522370 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:29:26.267958  522370 command_runner.go:130] > # 	"operations_total",
	I1206 10:29:26.267962  522370 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:29:26.267966  522370 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:29:26.267970  522370 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:29:26.267977  522370 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:29:26.267981  522370 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:29:26.267986  522370 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:29:26.267990  522370 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:29:26.267993  522370 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:29:26.267997  522370 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:29:26.268003  522370 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:29:26.268007  522370 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:29:26.268011  522370 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:29:26.268014  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268020  522370 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:29:26.268024  522370 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:29:26.268029  522370 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:29:26.268032  522370 command_runner.go:130] > # metrics_port = 9090
	I1206 10:29:26.268037  522370 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:29:26.268041  522370 command_runner.go:130] > # metrics_socket = ""
	I1206 10:29:26.268046  522370 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:29:26.268052  522370 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:29:26.268061  522370 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:29:26.268070  522370 command_runner.go:130] > # certificate on any modification event.
	I1206 10:29:26.268074  522370 command_runner.go:130] > # metrics_cert = ""
	I1206 10:29:26.268079  522370 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:29:26.268086  522370 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:29:26.268090  522370 command_runner.go:130] > # metrics_key = ""
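With enable_metrics = true and the default host/port shown above, the endpoint can be scraped directly; a sketch (the exact metric names depend on the enabled collectors):

    curl -s http://127.0.0.1:9090/metrics | grep -m5 '^crio_'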
	I1206 10:29:26.268099  522370 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:29:26.268106  522370 command_runner.go:130] > [crio.tracing]
	I1206 10:29:26.268112  522370 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:29:26.268116  522370 command_runner.go:130] > # enable_tracing = false
	I1206 10:29:26.268121  522370 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:29:26.268127  522370 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:29:26.268135  522370 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:29:26.268143  522370 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 10:29:26.268147  522370 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:29:26.268150  522370 command_runner.go:130] > [crio.nri]
	I1206 10:29:26.268155  522370 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:29:26.268158  522370 command_runner.go:130] > # enable_nri = true
	I1206 10:29:26.268162  522370 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:29:26.268166  522370 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:29:26.268170  522370 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:29:26.268174  522370 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:29:26.268181  522370 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:29:26.268187  522370 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:29:26.268195  522370 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:29:26.268252  522370 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:29:26.268260  522370 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:29:26.268265  522370 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:29:26.268270  522370 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:29:26.268274  522370 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:29:26.268287  522370 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:29:26.268294  522370 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:29:26.268307  522370 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:29:26.268312  522370 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:29:26.268322  522370 command_runner.go:130] > # - OCI hook injection
	I1206 10:29:26.268327  522370 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:29:26.268333  522370 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:29:26.268340  522370 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:29:26.268344  522370 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:29:26.268356  522370 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:29:26.268363  522370 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:29:26.268368  522370 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:29:26.268375  522370 command_runner.go:130] > #
	I1206 10:29:26.268380  522370 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:29:26.268384  522370 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:29:26.268397  522370 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:29:26.268403  522370 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:29:26.268408  522370 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:29:26.268416  522370 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:29:26.268421  522370 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:29:26.268425  522370 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:29:26.268431  522370 command_runner.go:130] > # ]
	I1206 10:29:26.268436  522370 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:29:26.268442  522370 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:29:26.268446  522370 command_runner.go:130] > [crio.stats]
	I1206 10:29:26.268454  522370 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:29:26.268465  522370 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:29:26.268469  522370 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:29:26.268475  522370 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:29:26.268484  522370 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:29:26.268489  522370 command_runner.go:130] > # collection_period = 0
	I1206 10:29:26.268581  522370 cni.go:84] Creating CNI manager for ""
	I1206 10:29:26.268595  522370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:29:26.268620  522370 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:29:26.268646  522370 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:29:26.268768  522370 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:29:26.268849  522370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:29:26.276198  522370 command_runner.go:130] > kubeadm
	I1206 10:29:26.276217  522370 command_runner.go:130] > kubectl
	I1206 10:29:26.276221  522370 command_runner.go:130] > kubelet
	I1206 10:29:26.277128  522370 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:29:26.277245  522370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:29:26.285085  522370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:29:26.297894  522370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:29:26.310811  522370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
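The 2221-byte kubeadm.yaml.new written here carries the config shown above; run by hand, the equivalent bootstrap step would be roughly the following sketch (minikube itself passes additional flags, e.g. preflight ignores, not shown):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new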
	I1206 10:29:26.323875  522370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:29:26.327560  522370 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:29:26.327877  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:26.463333  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:27.181623  522370 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:29:27.181646  522370 certs.go:195] generating shared ca certs ...
	I1206 10:29:27.181662  522370 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.181794  522370 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:29:27.181841  522370 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:29:27.181855  522370 certs.go:257] generating profile certs ...
	I1206 10:29:27.181981  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:29:27.182049  522370 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:29:27.182120  522370 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:29:27.182139  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:29:27.182178  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:29:27.182195  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:29:27.182206  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:29:27.182221  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:29:27.182231  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:29:27.182242  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:29:27.182252  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:29:27.182310  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:29:27.182343  522370 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:29:27.182351  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:29:27.182391  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:29:27.182420  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:29:27.182445  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:29:27.182502  522370 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:29:27.182537  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.182553  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.182567  522370 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.183155  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:29:27.204776  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:29:27.223807  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:29:27.246828  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:29:27.269763  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:29:27.290536  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:29:27.308147  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:29:27.326269  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:29:27.344314  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:29:27.361949  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:29:27.379296  522370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:29:27.396825  522370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:29:27.409539  522370 ssh_runner.go:195] Run: openssl version
	I1206 10:29:27.415501  522370 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:29:27.415885  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.423483  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:29:27.431381  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435336  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435420  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.435491  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:29:27.477997  522370 command_runner.go:130] > 51391683
	I1206 10:29:27.478450  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:29:27.485910  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.493199  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:29:27.500533  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504197  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504254  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.504314  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:29:27.549795  522370 command_runner.go:130] > 3ec20f2e
	I1206 10:29:27.550294  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:29:27.557856  522370 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.565301  522370 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:29:27.572772  522370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576768  522370 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576853  522370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.576925  522370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:29:27.618106  522370 command_runner.go:130] > b5213941
	I1206 10:29:27.618536  522370 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
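The 8-hex-digit values printed by `openssl x509 -hash -noout` above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes; each installed PEM is then symlinked to /etc/ssl/certs/<hash>.0 so the system trust store can find it by hash lookup. A minimal Go sketch of that same hash-and-link sequence, assuming openssl and ln are on PATH and using a hypothetical certificate path:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/example.pem" // hypothetical path
        // `openssl x509 -hash -noout` prints the subject-name hash, e.g. "51391683".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Mirrors the `sudo ln -fs` step in the log above.
        if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
            panic(err)
        }
        fmt.Println("linked", pem, "->", link)
    }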
	I1206 10:29:27.626130  522370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629702  522370 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:29:27.629728  522370 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:29:27.629736  522370 command_runner.go:130] > Device: 259,1	Inode: 3640487     Links: 1
	I1206 10:29:27.629742  522370 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:29:27.629749  522370 command_runner.go:130] > Access: 2025-12-06 10:25:18.913466133 +0000
	I1206 10:29:27.629754  522370 command_runner.go:130] > Modify: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629758  522370 command_runner.go:130] > Change: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629764  522370 command_runner.go:130] >  Birth: 2025-12-06 10:21:14.154593310 +0000
	I1206 10:29:27.629823  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:29:27.670498  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.670941  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:29:27.711871  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.712351  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:29:27.753204  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.753665  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:29:27.795554  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.796089  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:29:27.836809  522370 command_runner.go:130] > Certificate will not expire
	I1206 10:29:27.837203  522370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:29:27.878291  522370 command_runner.go:130] > Certificate will not expire
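Each `openssl x509 -checkend 86400` run above asks one question: does this certificate expire within the next 24 hours? An equivalent pure-Go check (a sketch, not minikube's code; the file path is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/example.crt") // hypothetical
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same predicate as `openssl x509 -checkend 86400`.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }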
	I1206 10:29:27.878357  522370 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:29:27.878433  522370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:27.878503  522370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:27.905835  522370 cri.go:89] found id: ""
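cri.go decides whether any kube-system containers already exist by running crictl with a namespace label filter; the empty result above (found id: "") means none are up yet. A sketch of the same invocation from Go, assuming crictl is installed on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the log shows ssh_runner executing on the node.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }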
	I1206 10:29:27.905910  522370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:29:27.912750  522370 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:29:27.912773  522370 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:29:27.912780  522370 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:29:27.913690  522370 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:29:27.913706  522370 kubeadm.go:598] restartPrimaryControlPlane start ...
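The restart-vs-fresh-init decision above hinges on whether the kubeadm state files listed by the preceding `sudo ls` exist. A minimal local sketch of that decision (the real check runs over SSH on the node, so this is illustrative only):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        existing := 0
        for _, p := range paths {
            if _, err := os.Stat(p); err == nil {
                existing++
            }
        }
        if existing > 0 {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no prior state, will run kubeadm init")
        }
    }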
	I1206 10:29:27.913783  522370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:29:27.921335  522370 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:29:27.921755  522370 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-123579" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.921867  522370 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "functional-123579" cluster setting kubeconfig missing "functional-123579" context setting]
	I1206 10:29:27.922200  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
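The repair above adds the missing "functional-123579" cluster and context entries back into the kubeconfig under a file lock. A sketch of the same repair using client-go's clientcmd package (assumed here; minikube's own kubeconfig.go and lock handling are not shown in the log), with hypothetical paths:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/user/.kube/config" // hypothetical path; locking omitted
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "functional-123579"
        // Repair the missing cluster and context entries, as in the log above.
        cluster := clientcmdapi.NewCluster()
        cluster.Server = "https://192.168.49.2:8441"
        cluster.CertificateAuthority = "/path/to/ca.crt" // hypothetical
        cfg.Clusters[name] = cluster
        ctx := clientcmdapi.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name
        cfg.Contexts[name] = ctx
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }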
	I1206 10:29:27.922608  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.922766  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
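The rest.Config dumped above boils down to three things: the API server host and a client certificate/key/CA triple. A self-contained sketch that builds the equivalent config and a clientset with client-go (paths are hypothetical stand-ins for the profile files in the dump):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Mirrors the fields visible in the dump above; paths are hypothetical.
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8441",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/path/to/client.crt",
                KeyFile:  "/path/to/client.key",
                CAFile:   "/path/to/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", clientset != nil)
    }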
	I1206 10:29:27.923311  522370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:29:27.923332  522370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:29:27.923338  522370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:29:27.923344  522370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:29:27.923348  522370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:29:27.923710  522370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:29:27.923805  522370 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:29:27.932172  522370 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:29:27.932206  522370 kubeadm.go:602] duration metric: took 18.493373ms to restartPrimaryControlPlane
	I1206 10:29:27.932216  522370 kubeadm.go:403] duration metric: took 53.86688ms to StartCluster
	I1206 10:29:27.932230  522370 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.932300  522370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.932906  522370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:29:27.933111  522370 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:29:27.933400  522370 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:29:27.933457  522370 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:29:27.933598  522370 addons.go:70] Setting storage-provisioner=true in profile "functional-123579"
	I1206 10:29:27.933615  522370 addons.go:239] Setting addon storage-provisioner=true in "functional-123579"
	I1206 10:29:27.933640  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.933662  522370 addons.go:70] Setting default-storageclass=true in profile "functional-123579"
	I1206 10:29:27.933709  522370 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-123579"
	I1206 10:29:27.934067  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.934105  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
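The toEnable map above is minikube's full addon switchboard for this profile; only the entries set to true (storage-provisioner, default-storageclass) produce the "Setting addon" lines that follow. A trivial sketch of that filtering, with a hypothetical trimmed-down map:

    package main

    import "fmt"

    func main() {
        // Hypothetical subset of the toEnable map in the log above.
        toEnable := map[string]bool{
            "storage-provisioner":  true,
            "default-storageclass": true,
            "ingress":              false,
        }
        for name, on := range toEnable {
            if on {
                fmt.Printf("Setting addon %s=true\n", name)
            }
        }
    }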
	I1206 10:29:27.937180  522370 out.go:179] * Verifying Kubernetes components...
	I1206 10:29:27.943300  522370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:29:27.955394  522370 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:29:27.955630  522370 kapi.go:59] client config for functional-123579: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:29:27.955941  522370 addons.go:239] Setting addon default-storageclass=true in "functional-123579"
	I1206 10:29:27.955970  522370 host.go:66] Checking if "functional-123579" exists ...
	I1206 10:29:27.956408  522370 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:29:27.980014  522370 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:29:27.983923  522370 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:27.983954  522370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:29:27.984026  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:27.996144  522370 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:27.996165  522370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:29:27.996228  522370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:29:28.024613  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:29:28.044906  522370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
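The two `docker container inspect -f` calls above extract the host port that Docker mapped to the container's SSH port 22; that port (33183 here) is what the subsequent ssh clients dial on 127.0.0.1. A sketch of the same extraction, using the exact Go template from the log and assuming the docker CLI is available:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as the cli_runner call above: host port mapped to 22/tcp.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
            "functional-123579").Output()
        if err != nil {
            panic(err)
        }
        port := strings.TrimSpace(string(out))
        fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }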
	I1206 10:29:28.158003  522370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:29:28.171055  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:28.191069  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:28.930363  522370 node_ready.go:35] waiting up to 6m0s for node "functional-123579" to be "Ready" ...
	I1206 10:29:28.930490  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930625  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930666  522370 retry.go:31] will retry after 220.153302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930749  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:28.930787  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:28.930813  522370 retry.go:31] will retry after 205.296978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
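Every "will retry after ..." line that follows comes from the same pattern: the kubectl apply fails while the apiserver is still down (connection refused on localhost:8441), and retry.go sleeps a growing, jittered delay before trying again. A minimal sketch of that retry loop (hand-rolled here, not minikube's retry.go; the failing function is a stand-in):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping a
    // jittered, doubling delay between tries -- the pattern behind the
    // "will retry after 220.153302ms" lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(5, 200*time.Millisecond, func() error {
            return errors.New("connection refused") // stand-in for the kubectl apply
        })
    }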
	I1206 10:29:28.930893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:28.930961  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:28.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
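The round_trippers Request/Response pairs above (and the many that follow) are node_ready.go polling GET /api/v1/nodes/functional-123579 roughly every 500ms until the node reports a Ready condition; status="" with milliseconds=0 means the TCP connection itself was refused. A sketch of an equivalent poll with client-go (hypothetical kubeconfig path; minikube uses its own transport, so this is illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll the node every 500ms, as the GET requests in the log above do.
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-123579", metav1.GetOptions{})
            if err != nil {
                fmt.Println("error getting node (will retry):", err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }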
	I1206 10:29:29.136761  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.151269  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.213820  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.217541  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.217581  522370 retry.go:31] will retry after 414.855546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235243  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.235363  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.235412  522370 retry.go:31] will retry after 542.074768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.431607  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.431755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.432098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:29.633557  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:29.704871  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.715208  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.715276  522370 retry.go:31] will retry after 512.072151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.778572  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:29.842567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:29.842631  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.842656  522370 retry.go:31] will retry after 453.896864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:29.930817  522370 type.go:168] "Request Body" body=""
	I1206 10:29:29.930917  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:29.931386  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.227644  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:30.292361  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.292404  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.292441  522370 retry.go:31] will retry after 965.22043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.297573  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:30.354035  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:30.357760  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.357796  522370 retry.go:31] will retry after 830.21573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:30.430970  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.431039  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.431358  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:30.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:29:30.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:30.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:30.931272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:31.188810  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:31.258540  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:31.280251  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.280382  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.280411  522370 retry.go:31] will retry after 670.25639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331402  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:31.331517  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.331545  522370 retry.go:31] will retry after 1.065706699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:31.430665  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.430772  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.431166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.930712  522370 type.go:168] "Request Body" body=""
	I1206 10:29:31.930893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:31.931401  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:31.951563  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:32.028942  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.028998  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.029018  522370 retry.go:31] will retry after 2.122665166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.397466  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:32.431043  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.431193  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.431584  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:32.458856  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:32.458892  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.458911  522370 retry.go:31] will retry after 1.728877951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:32.931628  522370 type.go:168] "Request Body" body=""
	I1206 10:29:32.931705  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:32.932104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:32.932161  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:33.430893  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.430960  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.431324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:33.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:29:33.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:33.931279  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.152755  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:34.188350  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:34.249027  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.249069  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.249090  522370 retry.go:31] will retry after 3.684646027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294198  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:34.294244  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.294296  522370 retry.go:31] will retry after 1.427612825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:34.431504  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:34.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:29:34.930753  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:34.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:35.430737  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.430834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:35.431258  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:35.722778  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:35.786215  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:35.786258  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.786277  522370 retry.go:31] will retry after 5.772571648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:35.931559  522370 type.go:168] "Request Body" body=""
	I1206 10:29:35.931640  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:35.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.431654  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.431914  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:36.930676  522370 type.go:168] "Request Body" body=""
	I1206 10:29:36.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:36.931086  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.430781  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:37.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:29:37.931560  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:37.931882  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:37.931937  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:37.934240  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:38.012005  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:38.012049  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.012071  522370 retry.go:31] will retry after 2.264254307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:38.430647  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.430724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.431052  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:38.930775  522370 type.go:168] "Request Body" body=""
	I1206 10:29:38.930848  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:38.931203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.430809  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:39.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:29:39.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:39.931197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:40.276629  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:40.338233  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:40.338274  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.338294  522370 retry.go:31] will retry after 6.465617702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:40.431489  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.431563  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.431893  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:40.431948  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:40.931681  522370 type.go:168] "Request Body" body=""
	I1206 10:29:40.931758  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:40.932017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.431219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:41.559542  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:41.618815  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:41.618852  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.618871  522370 retry.go:31] will retry after 5.212992024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:41.931382  522370 type.go:168] "Request Body" body=""
	I1206 10:29:41.931461  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:41.931787  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:42.431525  522370 type.go:168] "Request Body" body=""
	I1206 10:29:42.431601  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:42.431866  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:42.931428  522370 type.go:168] "Request Body" body=""
	I1206 10:29:42.931503  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:42.931826  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:42.931883  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:43.431618  522370 type.go:168] "Request Body" body=""
	I1206 10:29:43.431692  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:43.432027  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:43.931348  522370 type.go:168] "Request Body" body=""
	I1206 10:29:43.931423  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:43.931690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:44.431562  522370 type.go:168] "Request Body" body=""
	I1206 10:29:44.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:44.431999  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:44.930672  522370 type.go:168] "Request Body" body=""
	I1206 10:29:44.930749  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:44.931083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:45.430826  522370 type.go:168] "Request Body" body=""
	I1206 10:29:45.430904  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:45.431191  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:45.431243  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:45.930931  522370 type.go:168] "Request Body" body=""
	I1206 10:29:45.931023  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:45.931426  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:46.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:29:46.430842  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:46.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:46.804868  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:46.832399  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:46.865940  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.865975  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.865994  522370 retry.go:31] will retry after 4.982943882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906567  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:46.906612  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.906632  522370 retry.go:31] will retry after 5.755281988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:46.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:29:46.930817  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:46.931156  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:47.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:29:47.430851  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:47.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:47.931383  522370 type.go:168] "Request Body" body=""
	I1206 10:29:47.931460  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:47.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:47.931843  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:48.431576  522370 type.go:168] "Request Body" body=""
	I1206 10:29:48.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:48.431909  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:48.931675  522370 type.go:168] "Request Body" body=""
	I1206 10:29:48.931755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:48.932083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:49.430777  522370 type.go:168] "Request Body" body=""
	I1206 10:29:49.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:49.431211  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:49.930899  522370 type.go:168] "Request Body" body=""
	I1206 10:29:49.930969  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:49.931292  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:50.430989  522370 type.go:168] "Request Body" body=""
	I1206 10:29:50.431065  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:50.431426  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:50.431484  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:50.930772  522370 type.go:168] "Request Body" body=""
	I1206 10:29:50.930857  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:50.931213  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:51.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:29:51.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:51.431095  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:51.849751  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:29:51.909824  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:51.909861  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.909882  522370 retry.go:31] will retry after 17.161477779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:51.930951  522370 type.go:168] "Request Body" body=""
	I1206 10:29:51.931035  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:51.931342  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:52.431051  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.431146  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.431458  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:52.431512  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:52.663117  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:29:52.730608  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:29:52.730656  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.730678  522370 retry.go:31] will retry after 12.860735555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:29:52.931180  522370 type.go:168] "Request Body" body=""
	I1206 10:29:52.931254  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:52.931513  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:53.431586  522370 type.go:168] "Request Body" body=""
	I1206 10:29:53.431665  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:53.432017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:53.930759  522370 type.go:168] "Request Body" body=""
	I1206 10:29:53.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:53.931169  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:54.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:29:54.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:54.431095  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:54.930744  522370 type.go:168] "Request Body" body=""
	I1206 10:29:54.930824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:54.931164  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:54.931216  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:55.430912  522370 type.go:168] "Request Body" body=""
	I1206 10:29:55.430990  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:55.431336  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:55.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:29:55.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:55.931104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:56.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:29:56.430830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:56.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:56.930913  522370 type.go:168] "Request Body" body=""
	I1206 10:29:56.931011  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:56.931387  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:56.931449  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:57.430732  522370 type.go:168] "Request Body" body=""
	I1206 10:29:57.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:57.431149  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:57.931360  522370 type.go:168] "Request Body" body=""
	I1206 10:29:57.931442  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:57.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:58.431399  522370 type.go:168] "Request Body" body=""
	I1206 10:29:58.431472  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:58.431799  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:58.931551  522370 type.go:168] "Request Body" body=""
	I1206 10:29:58.931619  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:58.931871  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:29:58.931909  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:29:59.431652  522370 type.go:168] "Request Body" body=""
	I1206 10:29:59.431735  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:59.432062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:29:59.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:29:59.930819  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:29:59.931185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:00.449420  522370 type.go:168] "Request Body" body=""
	I1206 10:30:00.449497  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:00.449815  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:00.931632  522370 type.go:168] "Request Body" body=""
	I1206 10:30:00.931721  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:00.932114  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:00.932186  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:01.430891  522370 type.go:168] "Request Body" body=""
	I1206 10:30:01.430971  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:01.431362  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:01.930910  522370 type.go:168] "Request Body" body=""
	I1206 10:30:01.930981  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:01.931281  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:02.430983  522370 type.go:168] "Request Body" body=""
	I1206 10:30:02.431111  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:02.431463  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:02.931309  522370 type.go:168] "Request Body" body=""
	I1206 10:30:02.931390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:02.931736  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:03.431532  522370 type.go:168] "Request Body" body=""
	I1206 10:30:03.431608  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:03.431873  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:03.431923  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:03.931662  522370 type.go:168] "Request Body" body=""
	I1206 10:30:03.931740  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:03.932084  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:04.430758  522370 type.go:168] "Request Body" body=""
	I1206 10:30:04.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:04.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:04.930979  522370 type.go:168] "Request Body" body=""
	I1206 10:30:04.931048  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:04.931324  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:05.430768  522370 type.go:168] "Request Body" body=""
	I1206 10:30:05.430842  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:05.431235  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:05.591568  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:05.650107  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:05.653722  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.653756  522370 retry.go:31] will retry after 16.31009922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:05.931225  522370 type.go:168] "Request Body" body=""
	I1206 10:30:05.931303  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:05.931640  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:05.931697  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:06.431453  522370 type.go:168] "Request Body" body=""
	I1206 10:30:06.431523  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:06.431774  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:06.931557  522370 type.go:168] "Request Body" body=""
	I1206 10:30:06.931629  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:06.931951  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:07.431619  522370 type.go:168] "Request Body" body=""
	I1206 10:30:07.431700  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:07.432067  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:07.931280  522370 type.go:168] "Request Body" body=""
	I1206 10:30:07.931358  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:07.931625  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:08.431487  522370 type.go:168] "Request Body" body=""
	I1206 10:30:08.431561  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:08.431928  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:08.431989  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:08.930675  522370 type.go:168] "Request Body" body=""
	I1206 10:30:08.930751  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:08.931076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:09.072554  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:09.131495  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:09.131531  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.131550  522370 retry.go:31] will retry after 16.873374267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:09.430840  522370 type.go:168] "Request Body" body=""
	I1206 10:30:09.430908  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:09.431218  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:09.930794  522370 type.go:168] "Request Body" body=""
	I1206 10:30:09.930868  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:09.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:10.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:30:10.430802  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:10.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:10.930730  522370 type.go:168] "Request Body" body=""
	I1206 10:30:10.930805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:10.931062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:10.931111  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:11.430884  522370 type.go:168] "Request Body" body=""
	I1206 10:30:11.430959  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:11.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:11.930807  522370 type.go:168] "Request Body" body=""
	I1206 10:30:11.930877  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:11.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:12.430821  522370 type.go:168] "Request Body" body=""
	I1206 10:30:12.430897  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:12.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:12.931320  522370 type.go:168] "Request Body" body=""
	I1206 10:30:12.931390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:12.931738  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:12.931801  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:13.431588  522370 type.go:168] "Request Body" body=""
	I1206 10:30:13.431660  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:13.432007  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:13.930697  522370 type.go:168] "Request Body" body=""
	I1206 10:30:13.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:13.931074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:14.430876  522370 type.go:168] "Request Body" body=""
	I1206 10:30:14.430958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:14.431286  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:14.930809  522370 type.go:168] "Request Body" body=""
	I1206 10:30:14.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:14.931234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:15.430953  522370 type.go:168] "Request Body" body=""
	I1206 10:30:15.431021  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:15.431299  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:15.431359  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:15.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:30:15.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:15.931202  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:16.430790  522370 type.go:168] "Request Body" body=""
	I1206 10:30:16.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:16.431183  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:16.930736  522370 type.go:168] "Request Body" body=""
	I1206 10:30:16.930809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:16.931077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:17.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:30:17.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:17.431188  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:17.931237  522370 type.go:168] "Request Body" body=""
	I1206 10:30:17.931314  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:17.931645  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:17.931700  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:18.431393  522370 type.go:168] "Request Body" body=""
	I1206 10:30:18.431479  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:18.431748  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:18.931581  522370 type.go:168] "Request Body" body=""
	I1206 10:30:18.931653  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:18.931971  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:19.430702  522370 type.go:168] "Request Body" body=""
	I1206 10:30:19.430780  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:19.431097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:19.930811  522370 type.go:168] "Request Body" body=""
	I1206 10:30:19.930888  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:19.931178  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:20.430768  522370 type.go:168] "Request Body" body=""
	I1206 10:30:20.430839  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:20.431197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:20.431259  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:20.930943  522370 type.go:168] "Request Body" body=""
	I1206 10:30:20.931019  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:20.931387  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:21.431075  522370 type.go:168] "Request Body" body=""
	I1206 10:30:21.431159  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:21.431476  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:21.930790  522370 type.go:168] "Request Body" body=""
	I1206 10:30:21.930867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:21.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:21.964425  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:22.031284  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:22.031334  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:22.031356  522370 retry.go:31] will retry after 35.791693435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
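The retry.go line above schedules a second apply attempt after a randomized ~35 s delay rather than failing immediately. A compact sketch of that retry-with-jitter pattern; the base-plus-random-jitter formula here is an assumption for illustration, and only the observable "will retry after …" behaviour is taken from the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs fn up to attempts times, sleeping a randomized
// delay between tries. The jitter formula (base..2*base) is a guess;
// minikube's retry.go may compute its delays differently.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(3, time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}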
	I1206 10:30:22.430787  522370 type.go:168] "Request Body" body=""
	I1206 10:30:22.430867  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:22.431181  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:22.930968  522370 type.go:168] "Request Body" body=""
	I1206 10:30:22.931043  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:22.931326  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:22.931374  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:23.430789  522370 type.go:168] "Request Body" body=""
	I1206 10:30:23.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:23.431214  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:23.930931  522370 type.go:168] "Request Body" body=""
	I1206 10:30:23.931004  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:23.931354  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:24.430922  522370 type.go:168] "Request Body" body=""
	I1206 10:30:24.430996  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:24.431280  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:24.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:30:24.930844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:24.931166  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:25.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:30:25.430829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:25.431168  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:25.431230  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:25.931005  522370 type.go:168] "Request Body" body=""
	I1206 10:30:25.931194  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:25.932226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:26.005763  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:26.074782  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:26.074834  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:26.074855  522370 retry.go:31] will retry after 34.92165894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
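The same validation failure hits storage-provisioner.yaml: kubectl apply cannot download the OpenAPI schema while localhost:8441 refuses connections. A hedged sketch of the apply step as a plain local command (a hypothetical helper; the log actually runs kubectl over SSH with sudo and an explicit KUBECONFIG). --validate=false is the workaround the kubectl error itself names; note it only skips client-side schema validation and would not cure the refused connection:

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest shells out roughly the way the ssh_runner lines above
// do, assuming a kubectl binary on PATH. skipValidation adds the
// --validate=false flag suggested in the error text.
func applyManifest(path string, skipValidation bool) error {
	args := []string{"apply", "--force", "-f", path}
	if skipValidation {
		args = append(args, "--validate=false")
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml", true); err != nil {
		fmt.Println(err)
	}
}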
	I1206 10:30:26.431288  522370 type.go:168] "Request Body" body=""
	I1206 10:30:26.431390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:26.431714  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:26.931353  522370 type.go:168] "Request Body" body=""
	I1206 10:30:26.931426  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:26.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:27.431397  522370 type.go:168] "Request Body" body=""
	I1206 10:30:27.431473  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:27.431770  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:27.431821  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:27.931640  522370 type.go:168] "Request Body" body=""
	I1206 10:30:27.931715  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:27.932047  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:28.431697  522370 type.go:168] "Request Body" body=""
	I1206 10:30:28.431771  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:28.432103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:28.930727  522370 type.go:168] "Request Body" body=""
	I1206 10:30:28.930800  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:28.931097  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:29.430756  522370 type.go:168] "Request Body" body=""
	I1206 10:30:29.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:29.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:29.930774  522370 type.go:168] "Request Body" body=""
	I1206 10:30:29.930850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:29.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:29.931223  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:30.430841  522370 type.go:168] "Request Body" body=""
	I1206 10:30:30.430907  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:30.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:30.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:30:30.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:30.931181  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:31.430916  522370 type.go:168] "Request Body" body=""
	I1206 10:30:31.431010  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:31.431428  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:31.931099  522370 type.go:168] "Request Body" body=""
	I1206 10:30:31.931194  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:31.931454  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:31.931504  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:32.431294  522370 type.go:168] "Request Body" body=""
	I1206 10:30:32.431377  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:32.431741  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:32.931507  522370 type.go:168] "Request Body" body=""
	I1206 10:30:32.931587  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:32.931910  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:33.431612  522370 type.go:168] "Request Body" body=""
	I1206 10:30:33.431689  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:33.431967  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:33.930699  522370 type.go:168] "Request Body" body=""
	I1206 10:30:33.930774  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:33.931115  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:34.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:30:34.430956  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:34.431328  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:34.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:34.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:30:34.930826  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:34.931100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:35.430769  522370 type.go:168] "Request Body" body=""
	I1206 10:30:35.430844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:35.431198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:35.930924  522370 type.go:168] "Request Body" body=""
	I1206 10:30:35.931010  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:35.931368  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:36.431048  522370 type.go:168] "Request Body" body=""
	I1206 10:30:36.431167  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:36.431482  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:36.431535  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:36.931272  522370 type.go:168] "Request Body" body=""
	I1206 10:30:36.931345  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:36.931668  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:37.431458  522370 type.go:168] "Request Body" body=""
	I1206 10:30:37.431553  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:37.431867  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:37.931612  522370 type.go:168] "Request Body" body=""
	I1206 10:30:37.931682  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:37.932028  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:38.430754  522370 type.go:168] "Request Body" body=""
	I1206 10:30:38.430831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:38.431203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:38.930759  522370 type.go:168] "Request Body" body=""
	I1206 10:30:38.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:38.931173  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:38.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:39.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:30:39.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:39.431104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:39.930841  522370 type.go:168] "Request Body" body=""
	I1206 10:30:39.930938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:39.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:40.431028  522370 type.go:168] "Request Body" body=""
	I1206 10:30:40.431104  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:40.431481  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:40.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:30:40.931298  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:40.931552  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:40.931592  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:41.431348  522370 type.go:168] "Request Body" body=""
	I1206 10:30:41.431446  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:41.431767  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:41.931566  522370 type.go:168] "Request Body" body=""
	I1206 10:30:41.931647  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:41.931976  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:42.431636  522370 type.go:168] "Request Body" body=""
	I1206 10:30:42.431716  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:42.431988  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:42.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:30:42.931066  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:42.931431  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:43.431209  522370 type.go:168] "Request Body" body=""
	I1206 10:30:43.431287  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:43.431648  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:43.431703  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:43.931389  522370 type.go:168] "Request Body" body=""
	I1206 10:30:43.931457  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:43.931727  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:44.431509  522370 type.go:168] "Request Body" body=""
	I1206 10:30:44.431583  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:44.431898  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:44.930652  522370 type.go:168] "Request Body" body=""
	I1206 10:30:44.930726  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:44.931043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.430750  522370 type.go:168] "Request Body" body=""
	I1206 10:30:45.430832  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:30:45.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.931167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:45.931245  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:46.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:30:46.430992  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.431364  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:30:46.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.931154  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:47.430792  522370 type.go:168] "Request Body" body=""
	I1206 10:30:47.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.431273  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:47.931290  522370 type.go:168] "Request Body" body=""
	I1206 10:30:47.931389  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.931707  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.931764  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:48.431531  522370 type.go:168] "Request Body" body=""
	I1206 10:30:48.431600  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.431884  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.931635  522370 type.go:168] "Request Body" body=""
	I1206 10:30:48.931707  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.932051  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.430636  522370 type.go:168] "Request Body" body=""
	I1206 10:30:49.430720  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.431043  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.930721  522370 type.go:168] "Request Body" body=""
	I1206 10:30:49.930793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.931074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.430687  522370 type.go:168] "Request Body" body=""
	I1206 10:30:50.430783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.431076  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:50.431162  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:50.930764  522370 type.go:168] "Request Body" body=""
	I1206 10:30:50.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.931221  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.430755  522370 type.go:168] "Request Body" body=""
	I1206 10:30:51.430826  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.431099  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.930829  522370 type.go:168] "Request Body" body=""
	I1206 10:30:51.930912  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.931261  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:52.430981  522370 type.go:168] "Request Body" body=""
	I1206 10:30:52.431081  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.431382  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.431432  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:52.931312  522370 type.go:168] "Request Body" body=""
	I1206 10:30:52.931405  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.931664  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.430694  522370 type.go:168] "Request Body" body=""
	I1206 10:30:53.430779  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.930852  522370 type.go:168] "Request Body" body=""
	I1206 10:30:53.930925  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.931259  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.430827  522370 type.go:168] "Request Body" body=""
	I1206 10:30:54.430913  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.431229  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.930765  522370 type.go:168] "Request Body" body=""
	I1206 10:30:54.930847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.931254  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:30:55.431006  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.431312  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.930988  522370 type.go:168] "Request Body" body=""
	I1206 10:30:55.931078  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.931370  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.430800  522370 type.go:168] "Request Body" body=""
	I1206 10:30:56.430873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.930928  522370 type.go:168] "Request Body" body=""
	I1206 10:30:56.931021  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.931336  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:56.931382  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:57.430743  522370 type.go:168] "Request Body" body=""
	I1206 10:30:57.430812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.431182  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.823985  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:57.887311  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891368  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:57.891481  522370 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:30:57.930973  522370 type.go:168] "Request Body" body=""
	I1206 10:30:57.931045  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.931345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:30:58.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:30:58.930784  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.931072  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.430808  522370 type.go:168] "Request Body" body=""
	I1206 10:30:59.430894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.431320  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:59.930807  522370 type.go:168] "Request Body" body=""
	I1206 10:30:59.930882  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.931248  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.430992  522370 type.go:168] "Request Body" body=""
	I1206 10:31:00.431085  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.431404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.930779  522370 type.go:168] "Request Body" body=""
	I1206 10:31:00.930858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.997513  522370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:01.064863  522370 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068488  522370 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:01.068586  522370 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
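After the scheduled retries also fail, minikube downgrades both addon failures to warnings and carries on with start-up; the "duration metric: took 1m33s … enabled=[]" line that follows shows all of that time went to retries and no addon was enabled. A small sketch of that warn-and-continue shape (function names are hypothetical; the ~35 s real delay is shortened here):

package main

import (
	"fmt"
	"time"
)

// enableAddon retries apply and, on final failure, only warns --
// matching the log above, where the addon errors are reported but the
// start-up flow continues. apply stands in for the kubectl call.
func enableAddon(name string, apply func() error) {
	var err error
	for attempt := 0; attempt < 2; attempt++ {
		if err = apply(); err == nil {
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
}

func main() {
	enableAddon("storage-provisioner", func() error {
		return fmt.Errorf("connect: connection refused")
	})
	fmt.Println("* Enabled addons:") // flow continues with an empty list
}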
	I1206 10:31:01.073496  522370 out.go:179] * Enabled addons: 
	I1206 10:31:01.076263  522370 addons.go:530] duration metric: took 1m33.142805076s for enable addons: enabled=[]
	I1206 10:31:01.430965  522370 type.go:168] "Request Body" body=""
	I1206 10:31:01.431062  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.431429  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:01.431491  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:01.930728  522370 type.go:168] "Request Body" body=""
	I1206 10:31:01.930813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.931075  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:31:02.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.931199  522370 type.go:168] "Request Body" body=""
	I1206 10:31:02.931311  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.931626  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.431408  522370 type.go:168] "Request Body" body=""
	I1206 10:31:03.431503  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.431775  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:03.431826  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:03.931635  522370 type.go:168] "Request Body" body=""
	I1206 10:31:03.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.932077  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.430812  522370 type.go:168] "Request Body" body=""
	I1206 10:31:04.430889  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.431222  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.930928  522370 type.go:168] "Request Body" body=""
	I1206 10:31:04.931001  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.931294  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.430732  522370 type.go:168] "Request Body" body=""
	I1206 10:31:05.430807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.431205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.930777  522370 type.go:168] "Request Body" body=""
	I1206 10:31:05.930859  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.931245  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:05.931317  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.430961  522370 type.go:168] "Request Body" body=""
	I1206 10:31:06.431031  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.431335  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:31:06.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.931212  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.430742  522370 type.go:168] "Request Body" body=""
	I1206 10:31:07.430822  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.931277  522370 type.go:168] "Request Body" body=""
	I1206 10:31:07.931353  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.931638  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:07.931679  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.431521  522370 type.go:168] "Request Body" body=""
	I1206 10:31:08.431597  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.431952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.930701  522370 type.go:168] "Request Body" body=""
	I1206 10:31:08.930775  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.931170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.430860  522370 type.go:168] "Request Body" body=""
	I1206 10:31:09.430943  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.431243  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.930955  522370 type.go:168] "Request Body" body=""
	I1206 10:31:09.931035  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.931420  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.430778  522370 type.go:168] "Request Body" body=""
	I1206 10:31:10.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.431264  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:10.930903  522370 type.go:168] "Request Body" body=""
	I1206 10:31:10.930972  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.931257  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:31:11.430831  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.431176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.930888  522370 type.go:168] "Request Body" body=""
	I1206 10:31:11.930965  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.931366  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.431043  522370 type.go:168] "Request Body" body=""
	I1206 10:31:12.431118  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.431399  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:12.431444  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:12.931357  522370 type.go:168] "Request Body" body=""
	I1206 10:31:12.931433  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.931800  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.431602  522370 type.go:168] "Request Body" body=""
	I1206 10:31:13.431680  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.432016  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.930772  522370 type.go:168] "Request Body" body=""
	I1206 10:31:13.930841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.931103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.430796  522370 type.go:168] "Request Body" body=""
	I1206 10:31:14.430893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.431217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.930770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:14.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.931219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:14.931279  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.430782  522370 type.go:168] "Request Body" body=""
	I1206 10:31:15.430850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.431157  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:15.930757  522370 type.go:168] "Request Body" body=""
	I1206 10:31:15.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.931193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:31:16.430830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:31:16.930808  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.931093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.430742  522370 type.go:168] "Request Body" body=""
	I1206 10:31:17.430824  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.431217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.431272  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.931342  522370 type.go:168] "Request Body" body=""
	I1206 10:31:17.931425  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.931778  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.431537  522370 type.go:168] "Request Body" body=""
	I1206 10:31:18.431605  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.431868  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.930645  522370 type.go:168] "Request Body" body=""
	I1206 10:31:18.930720  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.931093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:31:19.430884  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.431254  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.431307  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:19.930711  522370 type.go:168] "Request Body" body=""
	I1206 10:31:19.930781  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.430790  522370 type.go:168] "Request Body" body=""
	I1206 10:31:20.430893  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.931043  522370 type.go:168] "Request Body" body=""
	I1206 10:31:20.931148  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.931503  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.431268  522370 type.go:168] "Request Body" body=""
	I1206 10:31:21.431356  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.431682  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:21.431723  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:21.931490  522370 type.go:168] "Request Body" body=""
	I1206 10:31:21.931570  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.931895  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.431704  522370 type.go:168] "Request Body" body=""
	I1206 10:31:22.431783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.432137  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.930934  522370 type.go:168] "Request Body" body=""
	I1206 10:31:22.931013  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.931330  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:31:23.430800  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.431163  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.930907  522370 type.go:168] "Request Body" body=""
	I1206 10:31:23.931011  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.931347  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:23.931408  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:31:24.430793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.431100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.930781  522370 type.go:168] "Request Body" body=""
	I1206 10:31:24.930881  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.931205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.430719  522370 type.go:168] "Request Body" body=""
	I1206 10:31:25.430793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:31:25.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.931098  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:31:26.430853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.431230  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.431285  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.930800  522370 type.go:168] "Request Body" body=""
	I1206 10:31:26.930898  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.931198  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.431688  522370 type.go:168] "Request Body" body=""
	I1206 10:31:27.431783  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.432074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.931195  522370 type.go:168] "Request Body" body=""
	I1206 10:31:27.931291  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.931692  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.431526  522370 type.go:168] "Request Body" body=""
	I1206 10:31:28.431657  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.432017  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:28.432087  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:28.930685  522370 type.go:168] "Request Body" body=""
	I1206 10:31:28.930798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.931176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.430715  522370 type.go:168] "Request Body" body=""
	I1206 10:31:29.430787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.431113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.930720  522370 type.go:168] "Request Body" body=""
	I1206 10:31:29.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.430735  522370 type.go:168] "Request Body" body=""
	I1206 10:31:30.430809  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.431203  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.930763  522370 type.go:168] "Request Body" body=""
	I1206 10:31:30.930838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.931220  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:30.931276  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:31.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:31:31.430999  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.431356  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.931034  522370 type.go:168] "Request Body" body=""
	I1206 10:31:31.931102  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.931394  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.430894  522370 type.go:168] "Request Body" body=""
	I1206 10:31:32.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.431350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.931206  522370 type.go:168] "Request Body" body=""
	I1206 10:31:32.931296  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.931626  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:32.931683  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:33.431202  522370 type.go:168] "Request Body" body=""
	I1206 10:31:33.431271  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.431607  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.931401  522370 type.go:168] "Request Body" body=""
	I1206 10:31:33.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.931817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.431625  522370 type.go:168] "Request Body" body=""
	I1206 10:31:34.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.432035  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.931669  522370 type.go:168] "Request Body" body=""
	I1206 10:31:34.931742  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.932009  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:34.932053  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:35.430771  522370 type.go:168] "Request Body" body=""
	I1206 10:31:35.430852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.431237  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.930935  522370 type.go:168] "Request Body" body=""
	I1206 10:31:35.931012  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.931347  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.430721  522370 type.go:168] "Request Body" body=""
	I1206 10:31:36.430797  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.431104  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.930741  522370 type.go:168] "Request Body" body=""
	I1206 10:31:36.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.931208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.430713  522370 type.go:168] "Request Body" body=""
	I1206 10:31:37.430790  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.431167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:37.431222  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:37.931252  522370 type.go:168] "Request Body" body=""
	I1206 10:31:37.931330  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.931655  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.431472  522370 type.go:168] "Request Body" body=""
	I1206 10:31:38.431546  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.431863  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.930659  522370 type.go:168] "Request Body" body=""
	I1206 10:31:38.930734  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.931062  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.430764  522370 type.go:168] "Request Body" body=""
	I1206 10:31:39.430838  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.431171  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.930872  522370 type.go:168] "Request Body" body=""
	I1206 10:31:39.931015  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.931393  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:39.931453  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.431186  522370 type.go:168] "Request Body" body=""
	I1206 10:31:40.431263  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.431606  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.931379  522370 type.go:168] "Request Body" body=""
	I1206 10:31:40.931446  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.931701  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.431485  522370 type.go:168] "Request Body" body=""
	I1206 10:31:41.431564  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.431887  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.930643  522370 type.go:168] "Request Body" body=""
	I1206 10:31:41.930718  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.931057  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.430753  522370 type.go:168] "Request Body" body=""
	I1206 10:31:42.430823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.431171  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.431219  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:42.931185  522370 type.go:168] "Request Body" body=""
	I1206 10:31:42.931265  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.931600  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.431298  522370 type.go:168] "Request Body" body=""
	I1206 10:31:43.431370  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.431690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.931472  522370 type.go:168] "Request Body" body=""
	I1206 10:31:43.931550  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.931859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.431577  522370 type.go:168] "Request Body" body=""
	I1206 10:31:44.431700  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.432084  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:44.432138  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:44.930770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:44.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.931206  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:31:45.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.431161  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.930853  522370 type.go:168] "Request Body" body=""
	I1206 10:31:45.930932  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.931318  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:31:46.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.431204  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.930747  522370 type.go:168] "Request Body" body=""
	I1206 10:31:46.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.931099  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:46.931170  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:47.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:47.430858  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.931329  522370 type.go:168] "Request Body" body=""
	I1206 10:31:47.931412  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.931751  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.431557  522370 type.go:168] "Request Body" body=""
	I1206 10:31:48.431630  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.431921  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.930683  522370 type.go:168] "Request Body" body=""
	I1206 10:31:48.930756  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.931083  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.430810  522370 type.go:168] "Request Body" body=""
	I1206 10:31:49.430898  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.431254  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:49.431313  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:49.930720  522370 type.go:168] "Request Body" body=""
	I1206 10:31:49.930793  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.931110  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:31:50.430874  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.431234  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.931041  522370 type.go:168] "Request Body" body=""
	I1206 10:31:50.931153  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.931493  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.431234  522370 type.go:168] "Request Body" body=""
	I1206 10:31:51.431312  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.431631  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:51.431691  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:51.931489  522370 type.go:168] "Request Body" body=""
	I1206 10:31:51.931580  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.931981  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.430704  522370 type.go:168] "Request Body" body=""
	I1206 10:31:52.430806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.431144  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.930913  522370 type.go:168] "Request Body" body=""
	I1206 10:31:52.930987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.931309  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:31:53.430813  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.431186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.930898  522370 type.go:168] "Request Body" body=""
	I1206 10:31:53.930988  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.931350  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:53.931408  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.431065  522370 type.go:168] "Request Body" body=""
	I1206 10:31:54.431152  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.431403  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:54.931103  522370 type.go:168] "Request Body" body=""
	I1206 10:31:54.931201  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.931542  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.431350  522370 type.go:168] "Request Body" body=""
	I1206 10:31:55.431428  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.431748  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.931464  522370 type.go:168] "Request Body" body=""
	I1206 10:31:55.931536  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.931792  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:55.931832  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:56.431629  522370 type.go:168] "Request Body" body=""
	I1206 10:31:56.431704  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:56.930782  522370 type.go:168] "Request Body" body=""
	I1206 10:31:56.930863  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.931219  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.430905  522370 type.go:168] "Request Body" body=""
	I1206 10:31:57.430978  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.931572  522370 type.go:168] "Request Body" body=""
	I1206 10:31:57.931656  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.931998  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:57.932052  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:58.430762  522370 type.go:168] "Request Body" body=""
	I1206 10:31:58.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.431216  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.930737  522370 type.go:168] "Request Body" body=""
	I1206 10:31:58.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.931055  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.430703  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.430788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.431185  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.930748  522370 type.go:168] "Request Body" body=""
	I1206 10:31:59.930832  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.931193  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.431018  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.431383  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:00.431435  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:00.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:32:00.930823  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.931167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.430987  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.431290  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:32:01.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.430793  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.430870  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.431209  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.931198  522370 type.go:168] "Request Body" body=""
	I1206 10:32:02.931274  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:02.931666  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:03.431269  522370 type.go:168] "Request Body" body=""
	I1206 10:32:03.431341  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.431598  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-123579 repeats every ~500ms from 10:32:03.9 through 10:33:05.4 (≈60 seconds of identical polling elided); every attempt logs an empty response (status="" headers="" milliseconds=0), and roughly every 2s node_ready.go:55 logs the same "connection refused" warning ...]
	I1206 10:33:05.930755  522370 type.go:168] "Request Body" body=""
	I1206 10:33:05.930835  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.931189  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:05.931249  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:06.430896  522370 type.go:168] "Request Body" body=""
	I1206 10:33:06.430973  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.431278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.930733  522370 type.go:168] "Request Body" body=""
	I1206 10:33:06.930807  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.931165  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:33:07.430825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.931223  522370 type.go:168] "Request Body" body=""
	I1206 10:33:07.931292  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.931564  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:07.931604  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:08.431432  522370 type.go:168] "Request Body" body=""
	I1206 10:33:08.431521  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.431859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:33:08.931724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.932093  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.430763  522370 type.go:168] "Request Body" body=""
	I1206 10:33:09.430862  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.431255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:33:09.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.430945  522370 type.go:168] "Request Body" body=""
	I1206 10:33:10.431022  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.431384  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:10.431441  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:10.931100  522370 type.go:168] "Request Body" body=""
	I1206 10:33:10.931186  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.931443  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:33:11.430818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.431167  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.930886  522370 type.go:168] "Request Body" body=""
	I1206 10:33:11.930967  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.931341  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.431022  522370 type.go:168] "Request Body" body=""
	I1206 10:33:12.431093  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.431430  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:12.431487  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:12.931422  522370 type.go:168] "Request Body" body=""
	I1206 10:33:12.931498  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.931813  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.431634  522370 type.go:168] "Request Body" body=""
	I1206 10:33:13.431707  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.432041  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.930727  522370 type.go:168] "Request Body" body=""
	I1206 10:33:13.930806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.931116  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.430764  522370 type.go:168] "Request Body" body=""
	I1206 10:33:14.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.431197  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.930912  522370 type.go:168] "Request Body" body=""
	I1206 10:33:14.930993  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.931381  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:14.931437  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:15.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:33:15.430795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.431103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:15.930758  522370 type.go:168] "Request Body" body=""
	I1206 10:33:15.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.430880  522370 type.go:168] "Request Body" body=""
	I1206 10:33:16.430966  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.431327  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.930724  522370 type.go:168] "Request Body" body=""
	I1206 10:33:16.930789  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.931103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.430923  522370 type.go:168] "Request Body" body=""
	I1206 10:33:17.430996  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.431378  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:17.431433  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:17.931311  522370 type.go:168] "Request Body" body=""
	I1206 10:33:17.931390  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.931703  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.431499  522370 type.go:168] "Request Body" body=""
	I1206 10:33:18.431573  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.431859  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.931659  522370 type.go:168] "Request Body" body=""
	I1206 10:33:18.931728  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.932101  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.430669  522370 type.go:168] "Request Body" body=""
	I1206 10:33:19.430749  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.431091  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.930819  522370 type.go:168] "Request Body" body=""
	I1206 10:33:19.930896  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.931201  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:19.931264  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:20.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:33:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.431145  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:20.930747  522370 type.go:168] "Request Body" body=""
	I1206 10:33:20.930830  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.931225  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.430895  522370 type.go:168] "Request Body" body=""
	I1206 10:33:21.430968  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.431276  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.930735  522370 type.go:168] "Request Body" body=""
	I1206 10:33:21.930814  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.931153  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:22.430742  522370 type.go:168] "Request Body" body=""
	I1206 10:33:22.430815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.431176  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:22.431236  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:22.930959  522370 type.go:168] "Request Body" body=""
	I1206 10:33:22.931032  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.931315  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.430982  522370 type.go:168] "Request Body" body=""
	I1206 10:33:23.431057  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.431412  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.931141  522370 type.go:168] "Request Body" body=""
	I1206 10:33:23.931222  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.931520  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.431230  522370 type.go:168] "Request Body" body=""
	I1206 10:33:24.431303  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.431559  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:24.431598  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:24.931419  522370 type.go:168] "Request Body" body=""
	I1206 10:33:24.931497  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.931798  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.431590  522370 type.go:168] "Request Body" body=""
	I1206 10:33:25.431664  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.432003  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.930717  522370 type.go:168] "Request Body" body=""
	I1206 10:33:25.930787  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.931105  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:33:26.430803  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.930777  522370 type.go:168] "Request Body" body=""
	I1206 10:33:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.931184  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:26.931237  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:27.430726  522370 type.go:168] "Request Body" body=""
	I1206 10:33:27.430818  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.431145  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:27.931181  522370 type.go:168] "Request Body" body=""
	I1206 10:33:27.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.931566  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.431438  522370 type.go:168] "Request Body" body=""
	I1206 10:33:28.431510  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.431869  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.931537  522370 type.go:168] "Request Body" body=""
	I1206 10:33:28.931618  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.931903  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:28.931960  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:29.430674  522370 type.go:168] "Request Body" body=""
	I1206 10:33:29.430755  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.431137  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:29.930914  522370 type.go:168] "Request Body" body=""
	I1206 10:33:29.930990  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.931351  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.431026  522370 type.go:168] "Request Body" body=""
	I1206 10:33:30.431102  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.431376  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.930781  522370 type.go:168] "Request Body" body=""
	I1206 10:33:30.930873  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.931192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:31.430878  522370 type.go:168] "Request Body" body=""
	I1206 10:33:31.430956  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.431307  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:31.431363  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:31.930818  522370 type.go:168] "Request Body" body=""
	I1206 10:33:31.930894  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.931174  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.430775  522370 type.go:168] "Request Body" body=""
	I1206 10:33:32.430850  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.431192  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.931213  522370 type.go:168] "Request Body" body=""
	I1206 10:33:32.931287  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.931623  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:33.431374  522370 type.go:168] "Request Body" body=""
	I1206 10:33:33.431441  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.431690  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:33.431729  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:33.931533  522370 type.go:168] "Request Body" body=""
	I1206 10:33:33.931612  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.931952  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.430686  522370 type.go:168] "Request Body" body=""
	I1206 10:33:34.430769  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.431100  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.930721  522370 type.go:168] "Request Body" body=""
	I1206 10:33:34.930796  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:35.430773  522370 type.go:168] "Request Body" body=""
	I1206 10:33:35.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.431209  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:35.930752  522370 type.go:168] "Request Body" body=""
	I1206 10:33:35.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.931211  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:35.931270  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:36.430716  522370 type.go:168] "Request Body" body=""
	I1206 10:33:36.430789  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.431117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:36.930838  522370 type.go:168] "Request Body" body=""
	I1206 10:33:36.930915  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.931278  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:37.430762  522370 type.go:168] "Request Body" body=""
	I1206 10:33:37.430839  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:37.931227  522370 type.go:168] "Request Body" body=""
	I1206 10:33:37.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.931579  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:37.931629  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:38.431327  522370 type.go:168] "Request Body" body=""
	I1206 10:33:38.431398  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.431755  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:38.931430  522370 type.go:168] "Request Body" body=""
	I1206 10:33:38.931512  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.931837  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.431622  522370 type.go:168] "Request Body" body=""
	I1206 10:33:39.431687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.431948  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.930714  522370 type.go:168] "Request Body" body=""
	I1206 10:33:39.930788  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.931147  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:40.430846  522370 type.go:168] "Request Body" body=""
	I1206 10:33:40.430923  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.431265  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:40.431320  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:40.930719  522370 type.go:168] "Request Body" body=""
	I1206 10:33:40.930795  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.931103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.430879  522370 type.go:168] "Request Body" body=""
	I1206 10:33:41.430958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.431368  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.931083  522370 type.go:168] "Request Body" body=""
	I1206 10:33:41.931178  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.931515  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:42.431226  522370 type.go:168] "Request Body" body=""
	I1206 10:33:42.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.431581  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:42.431622  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:42.931519  522370 type.go:168] "Request Body" body=""
	I1206 10:33:42.931593  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.931924  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.431683  522370 type.go:168] "Request Body" body=""
	I1206 10:33:43.431760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.432078  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.930714  522370 type.go:168] "Request Body" body=""
	I1206 10:33:43.930784  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.931091  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:44.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:33:44.430805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:44.930741  522370 type.go:168] "Request Body" body=""
	I1206 10:33:44.930820  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.931173  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:44.931227  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:45.430724  522370 type.go:168] "Request Body" body=""
	I1206 10:33:45.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.431154  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:45.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:33:45.930816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:46.430876  522370 type.go:168] "Request Body" body=""
	I1206 10:33:46.430959  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.431313  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:46.930987  522370 type.go:168] "Request Body" body=""
	I1206 10:33:46.931061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.931413  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:46.931474  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:47.430745  522370 type.go:168] "Request Body" body=""
	I1206 10:33:47.430826  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.431205  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:47.931381  522370 type.go:168] "Request Body" body=""
	I1206 10:33:47.931468  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.931814  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.431456  522370 type.go:168] "Request Body" body=""
	I1206 10:33:48.431530  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.431817  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.931583  522370 type.go:168] "Request Body" body=""
	I1206 10:33:48.931659  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.932002  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:48.932055  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:49.431685  522370 type.go:168] "Request Body" body=""
	I1206 10:33:49.431764  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.432103  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:49.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:33:49.930855  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.931113  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.430744  522370 type.go:168] "Request Body" body=""
	I1206 10:33:50.430816  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.431162  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.930749  522370 type.go:168] "Request Body" body=""
	I1206 10:33:50.930827  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.931179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:51.430725  522370 type.go:168] "Request Body" body=""
	I1206 10:33:51.430805  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.431143  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:51.431197  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:51.930877  522370 type.go:168] "Request Body" body=""
	I1206 10:33:51.930958  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.931307  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.431057  522370 type.go:168] "Request Body" body=""
	I1206 10:33:52.431157  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.431503  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.931288  522370 type.go:168] "Request Body" body=""
	I1206 10:33:52.931355  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.931612  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:53.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:33:53.431421  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.431742  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:53.431799  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:53.931572  522370 type.go:168] "Request Body" body=""
	I1206 10:33:53.931647  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.931997  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.430739  522370 type.go:168] "Request Body" body=""
	I1206 10:33:54.430806  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.431078  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.930776  522370 type.go:168] "Request Body" body=""
	I1206 10:33:54.930849  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.931177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:55.430770  522370 type.go:168] "Request Body" body=""
	I1206 10:33:55.430844  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.431172  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:55.930729  522370 type.go:168] "Request Body" body=""
	I1206 10:33:55.930801  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.931082  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:55.931151  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:56.430895  522370 type.go:168] "Request Body" body=""
	I1206 10:33:56.430967  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.431320  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:56.931032  522370 type.go:168] "Request Body" body=""
	I1206 10:33:56.931110  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.931459  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.430829  522370 type.go:168] "Request Body" body=""
	I1206 10:33:57.430902  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.931275  522370 type.go:168] "Request Body" body=""
	I1206 10:33:57.931349  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.931687  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:57.931743  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
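
The cycle above is minikube's node-readiness wait loop: roughly every 500ms it re-issues the same GET against the apiserver for node functional-123579, and because nothing is listening on 192.168.49.2:8441 every dial fails with connection refused, which node_ready.go logs and then retries. The sketch below is illustrative only: it is not minikube's actual node_ready.go code, and the URL, poll interval, and timeout are simply taken from this log. It reproduces the shape of that retry loop using nothing but the Go standard library.

// Illustrative sketch of the poll-until-ready pattern seen in the log above.
// Not minikube code; URL and timings copied from the log for demonstration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the log keeps requesting.
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-123579"

	// Certificate verification is skipped purely for this sketch; the real
	// client authenticates with the cluster CA from the kubeconfig.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// This is the branch the log keeps hitting:
			// "dial tcp 192.168.49.2:8441: connect: connection refused".
			fmt.Printf("will retry: %v\n", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Printf("apiserver answered with HTTP %d\n", resp.StatusCode)
		return
	}
	fmt.Println("timed out waiting for apiserver")
}

Run against a reachable apiserver, the loop exits on the first HTTP response; against the refused port above it prints the same will-retry line the test log shows until the deadline expires.
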
	[log condensed: the identical ~500ms poll cycle (GET https://192.168.49.2:8441/api/v1/nodes/functional-123579, empty response, milliseconds=0) repeats unchanged from 10:33:58 through 10:34:56, with node_ready.go:55 logging every 2-2.5s: error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1206 10:34:57.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.431029  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.431303  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.931550  522370 type.go:168] "Request Body" body=""
	I1206 10:34:57.931631  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.931966  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.932029  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:58.430730  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.431155  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.930843  522370 type.go:168] "Request Body" body=""
	I1206 10:34:58.930914  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.931207  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.430875  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.430950  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.431266  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.930814  522370 type.go:168] "Request Body" body=""
	I1206 10:34:59.930906  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.931260  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.430976  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.431061  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.431541  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:00.431605  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:00.931369  522370 type.go:168] "Request Body" body=""
	I1206 10:35:00.931476  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.931758  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.431561  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.431652  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.432065  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.930651  522370 type.go:168] "Request Body" body=""
	I1206 10:35:01.930724  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.930990  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.430729  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.430828  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.431196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.931011  522370 type.go:168] "Request Body" body=""
	I1206 10:35:02.931089  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.931442  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:02.931498  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:03.430733  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.930760  522370 type.go:168] "Request Body" body=""
	I1206 10:35:03.930833  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.931180  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.430888  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.430974  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.431297  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.930734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:04.930812  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.430766  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.430845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:05.431281  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:05.930825  522370 type.go:168] "Request Body" body=""
	I1206 10:35:05.930901  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.931256  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.430799  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.431148  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.930753  522370 type.go:168] "Request Body" body=""
	I1206 10:35:06.930834  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.931217  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.430915  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.430991  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.431345  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:07.431402  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:07.931619  522370 type.go:168] "Request Body" body=""
	I1206 10:35:07.931687  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.931937  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.430638  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.430708  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.930771  522370 type.go:168] "Request Body" body=""
	I1206 10:35:08.930854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.931232  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.430960  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.431028  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.431338  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:09.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.931199  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:09.931252  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.430779  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.430854  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.431226  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.930761  522370 type.go:168] "Request Body" body=""
	I1206 10:35:10.930829  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.931111  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.430901  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.431323  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:11.930846  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.430718  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.430798  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.431146  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.431211  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.931230  522370 type.go:168] "Request Body" body=""
	I1206 10:35:12.931308  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.931636  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.431462  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.431538  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.431885  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.931641  522370 type.go:168] "Request Body" body=""
	I1206 10:35:13.931714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.931987  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.430841  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.431200  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.431257  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:14.930975  522370 type.go:168] "Request Body" body=""
	I1206 10:35:14.931053  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.931466  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.431217  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.431297  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.431580  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.931377  522370 type.go:168] "Request Body" body=""
	I1206 10:35:15.931454  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.931796  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.431484  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.431559  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.431888  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:16.431945  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:16.931644  522370 type.go:168] "Request Body" body=""
	I1206 10:35:16.931713  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.931977  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.430728  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.430856  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.431208  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.931466  522370 type.go:168] "Request Body" body=""
	I1206 10:35:17.931549  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.931886  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.431642  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.431714  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.431964  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:18.432006  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:18.930687  522370 type.go:168] "Request Body" body=""
	I1206 10:35:18.930760  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.931117  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.430852  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.430938  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.431325  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.930751  522370 type.go:168] "Request Body" body=""
	I1206 10:35:19.930852  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.931255  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.430723  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.431177  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.930767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:20.930845  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.931190  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:20.931244  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:21.430727  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.430804  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.431059  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.930732  522370 type.go:168] "Request Body" body=""
	I1206 10:35:21.930815  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.931186  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.430734  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.430810  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.431194  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.931191  522370 type.go:168] "Request Body" body=""
	I1206 10:35:22.931266  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.931524  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:22.931567  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:23.431346  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.431424  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.431932  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.930769  522370 type.go:168] "Request Body" body=""
	I1206 10:35:23.930843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.430741  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.430821  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.431074  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.930743  522370 type.go:168] "Request Body" body=""
	I1206 10:35:24.930825  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.931196  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.430898  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.430975  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.431343  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:25.431399  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:25.931031  522370 type.go:168] "Request Body" body=""
	I1206 10:35:25.931103  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.931404  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.430767  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.430843  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.431170  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.930780  522370 type.go:168] "Request Body" body=""
	I1206 10:35:26.930853  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.931215  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.430765  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.430836  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.431109  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.931322  522370 type.go:168] "Request Body" body=""
	I1206 10:35:27.931408  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.931759  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:27.931820  522370 node_ready.go:55] error getting node "functional-123579" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-123579": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:28.430752  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.430847  522370 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-123579" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.431179  522370 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.930742  522370 type.go:168] "Request Body" body=""
	I1206 10:35:28.930795  522370 node_ready.go:38] duration metric: took 6m0.000265171s for node "functional-123579" to be "Ready" ...
	I1206 10:35:28.934235  522370 out.go:203] 
	W1206 10:35:28.937230  522370 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:35:28.937255  522370 out.go:285] * 
	W1206 10:35:28.939411  522370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:35:28.942269  522370 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:35:37 functional-123579 crio[5369]: time="2025-12-06T10:35:37.825946902Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=26927795-6759-44f8-add4-c26f908ce36b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901587472Z" level=info msg="Checking image status: minikube-local-cache-test:functional-123579" id=88bd426f-8c5f-4fbf-aba5-2f5838db28ed name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901763378Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901802261Z" level=info msg="Image minikube-local-cache-test:functional-123579 not found" id=88bd426f-8c5f-4fbf-aba5-2f5838db28ed name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.901873653Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-123579 found" id=88bd426f-8c5f-4fbf-aba5-2f5838db28ed name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.925177932Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-123579" id=106d2dcb-4ec2-48ec-9614-c919c8de59ae name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.925312296Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-123579 not found" id=106d2dcb-4ec2-48ec-9614-c919c8de59ae name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.925358605Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-123579 found" id=106d2dcb-4ec2-48ec-9614-c919c8de59ae name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.949512637Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-123579" id=bedae279-2bdf-4e1d-bd1a-fd270469b349 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.949644146Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-123579 not found" id=bedae279-2bdf-4e1d-bd1a-fd270469b349 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:38 functional-123579 crio[5369]: time="2025-12-06T10:35:38.949683243Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-123579 found" id=bedae279-2bdf-4e1d-bd1a-fd270469b349 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:39 functional-123579 crio[5369]: time="2025-12-06T10:35:39.957711018Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=841d613d-f718-4e71-9062-0e61fb23cf91 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.314941799Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a2b6cfb4-2779-4f28-9ac1-b03566db9430 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.315155956Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a2b6cfb4-2779-4f28-9ac1-b03566db9430 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.315207007Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a2b6cfb4-2779-4f28-9ac1-b03566db9430 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.937195525Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a10dd09a-06ad-4ccd-95d6-4421c496e934 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.937380563Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a10dd09a-06ad-4ccd-95d6-4421c496e934 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.937444512Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a10dd09a-06ad-4ccd-95d6-4421c496e934 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.969660306Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=72a41cda-abbf-41f3-942e-80fc94d1c824 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.96981396Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=72a41cda-abbf-41f3-942e-80fc94d1c824 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.969854682Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=72a41cda-abbf-41f3-942e-80fc94d1c824 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.994752542Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e8c13efc-96c7-4c82-882d-ac67484cb38d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.994886389Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e8c13efc-96c7-4c82-882d-ac67484cb38d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:40 functional-123579 crio[5369]: time="2025-12-06T10:35:40.994922737Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=e8c13efc-96c7-4c82-882d-ac67484cb38d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:35:41 functional-123579 crio[5369]: time="2025-12-06T10:35:41.547211049Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=88a049bd-ddb4-4825-8e05-272f51d7ed42 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:35:45.886855    9553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:45.887352    9553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:45.889057    9553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:45.889514    9553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:35:45.891181    9553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:35:45 up  3:18,  0 user,  load average: 0.32, 0.31, 0.82
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:35:43 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:43 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 06 10:35:43 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:43 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:44 functional-123579 kubelet[9429]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:44 functional-123579 kubelet[9429]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:44 functional-123579 kubelet[9429]: E1206 10:35:44.006018    9429 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:44 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:44 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:44 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 06 10:35:44 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:44 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:44 functional-123579 kubelet[9456]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:44 functional-123579 kubelet[9456]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:44 functional-123579 kubelet[9456]: E1206 10:35:44.731257    9456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:44 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:44 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:35:45 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Dec 06 10:35:45 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:45 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:35:45 functional-123579 kubelet[9469]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:45 functional-123579 kubelet[9469]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:35:45 functional-123579 kubelet[9469]: E1206 10:35:45.502379    9469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:35:45 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:35:45 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
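
The six minutes of refused GETs in the log above are minikube's node-readiness wait (node_ready.go) polling the apiserver until the node reports Ready or the deadline expires. The sketch below shows that retry-until-deadline pattern with client-go; the kubeconfig path is a placeholder, and the 500ms cadence and 6m deadline are read off the log, so treat this as an illustration rather than minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube resolves its own profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 6m, matching the cadence and the
	// "took 6m0.000265171s" duration recorded in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-123579", metav1.GetOptions{})
			if err != nil {
				// Transient errors such as "connection refused" are logged
				// and retried instead of aborting the wait.
				fmt.Printf("error getting node (will retry): %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("node never became Ready: %v\n", err)
	}
}

With the apiserver down every Get fails, the condition function keeps returning false, and PollUntilContextTimeout returns context.DeadlineExceeded, which surfaces as the "WaitNodeCondition: context deadline exceeded" exit above.
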
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (350.063914ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.52s)
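
Both this failure and the refused apiserver connections trace back to the kubelet crash-loop in the log above: on this cgroup v1 host, kubelet v1.35 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1") unless the kubelet configuration sets 'FailCgroupV1' to 'false', so the control-plane static pods never start and port 8441 keeps refusing connections. Below is a short Go sketch of the conventional cgroup-version probe; the /sys/fs/cgroup path is the standard unified-hierarchy mount point and is an assumption on non-standard hosts.

package main

import (
	"fmt"
	"os"
)

// cgroupV2 reports whether the host mounts the unified cgroup v2
// hierarchy: cgroup.controllers exists at the cgroup root only on v2.
func cgroupV2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

func main() {
	if cgroupV2() {
		fmt.Println("cgroup v2 host: kubelet v1.35 starts normally")
		return
	}
	// On cgroup v1 (as on this CI host), kubelet v1.35 refuses to start
	// unless its configuration explicitly sets FailCgroupV1 to false.
	fmt.Println("cgroup v1 host: kubelet v1.35 exits unless FailCgroupV1 is set to false")
}
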

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-123579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 10:38:35.937792  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:13.261345  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:41:36.323290  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:43:35.939324  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:13.255410  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-123579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.386940007s)

                                                
                                                
-- stdout --
	* [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000163749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
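Note: the repeated [WARNING SystemVerification] in the stderr above concerns cgroup v1 support on the 5.15.0-1084-aws host kernel. A minimal probe of which cgroup hierarchy the host actually mounts (a generic check, not part of this harness; run it on the host or inside the node via minikube ssh):

	# cgroup2fs -> unified cgroup v2 hierarchy; tmpfs -> legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/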
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-123579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.388256548s for "functional-123579" cluster.
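Both kubeadm attempts fail at the same wait-control-plane phase: the kubelet never answers its local health endpoint. A triage sketch built only from commands this log already suggests (the systemctl/journalctl hints, the healthz probe, and the cgroup-driver flag from the closing suggestion); the binary path and profile name are taken from this run:

	# Inspect the kubelet inside the minikube node
	out/minikube-linux-arm64 -p functional-123579 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p functional-123579 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Probe the endpoint kubeadm polls for up to 4m0s
	out/minikube-linux-arm64 -p functional-123579 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# Retry the start with the cgroup driver the error text recommends
	out/minikube-linux-arm64 start -p functional-123579 --extra-config=kubelet.cgroup-driver=systemd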
I1206 10:47:59.280579  488068 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
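The full inspect dump above can be reduced to the few fields the post-mortem actually cares about; a sketch using docker's Go-template output (the same mechanism this log uses later for the 22/tcp port mapping), with this run's container name:

	docker inspect -f '{{.State.Status}}' functional-123579
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-123579
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-123579").IPAddress}}' functional-123579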
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (322.719646ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 logs -n 25: (1.278135235s)
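The harness captures only the last 25 log lines below; the advisory box earlier in this failure asks for a complete capture when filing an issue, which the flag it quotes provides:

	out/minikube-linux-arm64 -p functional-123579 logs --file=logs.txt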
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image   │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete  │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start   │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ start   │ -p functional-123579 --alsologtostderr -v=8                                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:latest                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add minikube-local-cache-test:functional-123579                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache delete minikube-local-cache-test:functional-123579                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl images                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ cache   │ functional-123579 cache reload                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ kubectl │ functional-123579 kubectl -- --context functional-123579 get pods                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ start   │ -p functional-123579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:35:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:35:46.955658  528268 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:46.955828  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.955833  528268 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:46.955837  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.956177  528268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:35:46.956655  528268 out.go:368] Setting JSON to false
	I1206 10:35:46.957664  528268 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11898,"bootTime":1765005449,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:35:46.957734  528268 start.go:143] virtualization:  
	I1206 10:35:46.961283  528268 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:35:46.964510  528268 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:35:46.964613  528268 notify.go:221] Checking for updates...
	I1206 10:35:46.968278  528268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:46.971356  528268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:35:46.974199  528268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:35:46.977104  528268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:35:46.980765  528268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:46.984213  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:46.984322  528268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:47.012645  528268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:35:47.012749  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.074577  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.064697556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.074671  528268 docker.go:319] overlay module found
	I1206 10:35:47.077640  528268 out.go:179] * Using the docker driver based on existing profile
	I1206 10:35:47.080521  528268 start.go:309] selected driver: docker
	I1206 10:35:47.080533  528268 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.080637  528268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:47.080758  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.138440  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.128848609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.138821  528268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:35:47.138844  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:47.138899  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:47.138936  528268 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.144166  528268 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:35:47.147068  528268 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:35:47.149949  528268 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:35:47.152780  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:47.152816  528268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:35:47.152824  528268 cache.go:65] Caching tarball of preloaded images
	I1206 10:35:47.152870  528268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:35:47.152921  528268 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:35:47.152931  528268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:35:47.153043  528268 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:35:47.172511  528268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:35:47.172523  528268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:35:47.172545  528268 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:35:47.172580  528268 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:35:47.172652  528268 start.go:364] duration metric: took 54.497µs to acquireMachinesLock for "functional-123579"
	I1206 10:35:47.172672  528268 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:35:47.172676  528268 fix.go:54] fixHost starting: 
	I1206 10:35:47.172937  528268 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:35:47.189604  528268 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:35:47.189624  528268 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:35:47.192615  528268 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:35:47.192637  528268 machine.go:94] provisionDockerMachine start ...
	I1206 10:35:47.192731  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.209670  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.209990  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.209996  528268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:35:47.362840  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.362854  528268 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:35:47.362918  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.381544  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.381860  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.381868  528268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:35:47.544930  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.545031  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.563487  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.563810  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.563823  528268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:35:47.717170  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:35:47.717187  528268 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:35:47.717204  528268 ubuntu.go:190] setting up certificates
	I1206 10:35:47.717211  528268 provision.go:84] configureAuth start
	I1206 10:35:47.717282  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:47.741856  528268 provision.go:143] copyHostCerts
	I1206 10:35:47.741924  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:35:47.741936  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:35:47.742009  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:35:47.742105  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:35:47.742109  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:35:47.742132  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:35:47.742180  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:35:47.742184  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:35:47.742206  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:35:47.742252  528268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:35:47.924439  528268 provision.go:177] copyRemoteCerts
	I1206 10:35:47.924500  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:35:47.924538  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.942367  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.047397  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:35:48.065928  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:35:48.085149  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:35:48.103937  528268 provision.go:87] duration metric: took 386.701009ms to configureAuth
	I1206 10:35:48.103956  528268 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:35:48.104161  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:48.104265  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.122386  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:48.122699  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:48.122711  528268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:35:48.484149  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:35:48.484161  528268 machine.go:97] duration metric: took 1.291517603s to provisionDockerMachine
	I1206 10:35:48.484171  528268 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:35:48.484183  528268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:35:48.484243  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:35:48.484311  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.507680  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.615171  528268 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:35:48.618416  528268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:35:48.618434  528268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:35:48.618444  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:35:48.618496  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:35:48.618569  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:35:48.618650  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:35:48.618693  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:35:48.626464  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:48.643882  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:35:48.662582  528268 start.go:296] duration metric: took 178.395271ms for postStartSetup
	I1206 10:35:48.662675  528268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:35:48.662713  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.680751  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.784322  528268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:35:48.789238  528268 fix.go:56] duration metric: took 1.616554387s for fixHost
	I1206 10:35:48.789253  528268 start.go:83] releasing machines lock for "functional-123579", held for 1.616594099s
	I1206 10:35:48.789324  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:48.807477  528268 ssh_runner.go:195] Run: cat /version.json
	I1206 10:35:48.807520  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.807562  528268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:35:48.807618  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.828942  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.845083  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:49.020126  528268 ssh_runner.go:195] Run: systemctl --version
	I1206 10:35:49.026608  528268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:35:49.065500  528268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:35:49.069961  528268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:35:49.070024  528268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:35:49.077978  528268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:35:49.077992  528268 start.go:496] detecting cgroup driver to use...
	I1206 10:35:49.078033  528268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:35:49.078078  528268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:35:49.093402  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:35:49.106707  528268 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:35:49.106771  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:35:49.122603  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:35:49.135424  528268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:35:49.251969  528268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:35:49.384025  528268 docker.go:234] disabling docker service ...
	I1206 10:35:49.384082  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:35:49.398904  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:35:49.412283  528268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:35:49.535452  528268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:35:49.651851  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:35:49.665735  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:35:49.680503  528268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:35:49.680561  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.689947  528268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:35:49.690006  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.699358  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.708725  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.718744  528268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:35:49.727534  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.737013  528268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.745582  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.754308  528268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:35:49.762144  528268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:35:49.769875  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:49.884338  528268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:35:50.052236  528268 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:35:50.052348  528268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:35:50.057582  528268 start.go:564] Will wait 60s for crictl version
	I1206 10:35:50.057651  528268 ssh_runner.go:195] Run: which crictl
	I1206 10:35:50.062638  528268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:35:50.100652  528268 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:35:50.100743  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.139579  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.174800  528268 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:35:50.177732  528268 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:35:50.194850  528268 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:35:50.201950  528268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:35:50.204938  528268 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:35:50.205078  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:50.205145  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.240680  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.240692  528268 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:35:50.240750  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.267939  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.267955  528268 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:35:50.267962  528268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:35:50.268053  528268 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:35:50.268129  528268 ssh_runner.go:195] Run: crio config
	I1206 10:35:50.326220  528268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:35:50.326240  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:50.326248  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:50.326256  528268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:35:50.326280  528268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:35:50.326407  528268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:35:50.326477  528268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:35:50.334319  528268 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:35:50.334378  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:35:50.341826  528268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:35:50.354245  528268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:35:50.367015  528268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:35:50.379350  528268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:35:50.382958  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:50.504018  528268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:35:50.930865  528268 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:35:50.930875  528268 certs.go:195] generating shared ca certs ...
	I1206 10:35:50.930889  528268 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:35:50.931046  528268 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:35:50.931093  528268 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:35:50.931099  528268 certs.go:257] generating profile certs ...
	I1206 10:35:50.931220  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:35:50.931274  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:35:50.931318  528268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:35:50.931430  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:35:50.931460  528268 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:35:50.931466  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:35:50.931493  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:35:50.931515  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:35:50.931536  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:35:50.931577  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:50.932148  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:35:50.953643  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:35:50.975543  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:35:50.998708  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:35:51.019841  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:35:51.038179  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:35:51.055740  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:35:51.075573  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:35:51.094756  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:35:51.113922  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:35:51.132368  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:35:51.150650  528268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:35:51.163984  528268 ssh_runner.go:195] Run: openssl version
	I1206 10:35:51.171418  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.179298  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:35:51.187013  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190756  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190814  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.231889  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:35:51.239348  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.246609  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:35:51.254276  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258574  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258631  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.301011  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:35:51.308790  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.316400  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:35:51.324195  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328353  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328409  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.371753  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:35:51.379339  528268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:35:51.383319  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:35:51.424469  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:35:51.465529  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:35:51.511345  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:35:51.565170  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:35:51.614532  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:35:51.665468  528268 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:51.665553  528268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:35:51.665612  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.699589  528268 cri.go:89] found id: ""
	I1206 10:35:51.699652  528268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:35:51.708250  528268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:35:51.708260  528268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:35:51.708318  528268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:35:51.716593  528268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.717135  528268 kubeconfig.go:125] found "functional-123579" server: "https://192.168.49.2:8441"
	I1206 10:35:51.718506  528268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:35:51.728290  528268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:21:13.758601441 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:35:50.371679399 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1206 10:35:51.728307  528268 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:35:51.728319  528268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:35:51.728381  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.763757  528268 cri.go:89] found id: ""
	I1206 10:35:51.763820  528268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:35:51.777420  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:35:51.785097  528268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:25 /etc/kubernetes/scheduler.conf
	
	I1206 10:35:51.785162  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:35:51.792642  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:35:51.800316  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.800387  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:35:51.808313  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.815662  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.815715  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.823153  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:35:51.831093  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.831167  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:35:51.838577  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:35:51.846346  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:51.894809  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:52.979571  528268 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084737023s)
	I1206 10:35:52.979630  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.188528  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.255794  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.309672  528268 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:35:53.309740  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:53.810758  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:54.309899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:54.810832  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:55.309958  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:55.809819  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:56.310103  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:56.809902  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:57.309923  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:57.809975  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:58.310731  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:58.809924  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:59.310585  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:59.810731  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:00.309923  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:00.810538  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:01.310473  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:01.810374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:02.310412  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:02.809925  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:03.309918  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:03.810667  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:04.310497  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:04.810559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:05.310616  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:05.810787  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:06.310760  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:06.810542  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:07.310481  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:07.810515  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:08.310271  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:08.810300  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:09.309935  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:09.809899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:10.310756  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:10.809928  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:11.309919  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:11.809916  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:12.310322  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:12.809962  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:13.309904  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:13.809901  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:14.309825  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:14.809939  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:15.309858  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:15.810769  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:16.310915  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:16.809905  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:17.310298  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:17.809935  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:18.310774  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:18.810876  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:19.310588  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:19.810539  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:20.309961  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:20.810313  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:21.310718  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:21.810176  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:22.310761  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:22.809819  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:23.310605  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:23.810607  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:24.310709  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:24.810672  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:25.309883  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:25.810296  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:26.309901  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:26.810157  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:27.310838  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:27.810698  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:28.309956  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:28.809934  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:29.310713  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:29.810598  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:30.310564  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:30.809937  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:31.309915  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:31.810618  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:32.310478  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:32.809942  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:33.310175  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:33.810817  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:34.310221  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:34.810764  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:35.309907  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:35.810700  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:36.310275  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:36.810581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:37.310397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:37.809951  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:38.310518  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:38.810174  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:39.310213  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:39.810271  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:40.309911  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:40.810748  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:41.310557  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:41.810632  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:42.309870  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:42.810506  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:43.309942  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:43.810676  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:44.310713  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:44.810703  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:45.310440  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:45.810823  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:46.309845  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:46.810726  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:47.310769  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:47.809917  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:48.310694  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:48.810273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:49.310273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:49.810301  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:50.309899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:50.809907  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:51.309963  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:51.810551  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:52.310532  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:52.810599  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:53.310630  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:53.310706  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:53.342266  528268 cri.go:89] found id: ""
	I1206 10:36:53.342280  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.342287  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:53.342292  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:53.342356  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:53.368755  528268 cri.go:89] found id: ""
	I1206 10:36:53.368774  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.368781  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:53.368785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:53.368846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:53.393431  528268 cri.go:89] found id: ""
	I1206 10:36:53.393447  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.393454  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:53.393459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:53.393515  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:53.418954  528268 cri.go:89] found id: ""
	I1206 10:36:53.418967  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.418974  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:53.418979  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:53.419036  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:53.444726  528268 cri.go:89] found id: ""
	I1206 10:36:53.444740  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.444747  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:53.444752  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:53.444809  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:53.469041  528268 cri.go:89] found id: ""
	I1206 10:36:53.469054  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.469062  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:53.469067  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:53.469122  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:53.494455  528268 cri.go:89] found id: ""
	I1206 10:36:53.494468  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.494475  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:53.494483  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:53.494496  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:53.557127  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:53.557137  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:53.557148  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:53.629870  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:53.629900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:53.661451  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:53.661466  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:53.730909  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:53.730927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:56.247245  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:56.257306  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:56.257364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:56.286141  528268 cri.go:89] found id: ""
	I1206 10:36:56.286155  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.286163  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:56.286168  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:56.286228  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:56.313467  528268 cri.go:89] found id: ""
	I1206 10:36:56.313481  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.313488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:56.313499  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:56.313559  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:56.340777  528268 cri.go:89] found id: ""
	I1206 10:36:56.340791  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.340798  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:56.340803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:56.340862  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:56.367085  528268 cri.go:89] found id: ""
	I1206 10:36:56.367099  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.367106  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:56.367111  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:56.367188  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:56.392392  528268 cri.go:89] found id: ""
	I1206 10:36:56.392407  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.392414  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:56.392420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:56.392482  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:56.417786  528268 cri.go:89] found id: ""
	I1206 10:36:56.417799  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.417807  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:56.417812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:56.417871  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:56.443872  528268 cri.go:89] found id: ""
	I1206 10:36:56.443886  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.443893  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:56.443901  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:56.443911  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:56.509704  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:56.509723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:56.524726  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:56.524742  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:56.590779  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:56.590789  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:56.590799  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:56.657863  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:56.657883  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
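
The loop above is minikube's apiserver health probe: roughly every three seconds it checks for a live kube-apiserver process, then asks the container runtime for each expected control-plane container by name, and every query comes back empty. The probe can be reproduced by hand from inside the node (for example via minikube ssh); a minimal sketch using only the commands that appear verbatim in the log, with just the loop added here:

    #!/bin/bash
    # Reproduce minikube's control-plane probe (run inside the node).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" || echo "no apiserver process"
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done
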
	I1206 10:36:59.188879  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:59.199665  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:59.199726  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:59.232126  528268 cri.go:89] found id: ""
	I1206 10:36:59.232140  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.232148  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:59.232153  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:59.232212  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:59.257550  528268 cri.go:89] found id: ""
	I1206 10:36:59.257564  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.257571  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:59.257576  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:59.257633  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:59.282608  528268 cri.go:89] found id: ""
	I1206 10:36:59.282623  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.282630  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:59.282636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:59.282698  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:59.312791  528268 cri.go:89] found id: ""
	I1206 10:36:59.312806  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.312813  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:59.312819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:59.312881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:59.339361  528268 cri.go:89] found id: ""
	I1206 10:36:59.339376  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.339383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:59.339388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:59.339447  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:59.366255  528268 cri.go:89] found id: ""
	I1206 10:36:59.366269  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.366276  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:59.366281  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:59.366339  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:59.394131  528268 cri.go:89] found id: ""
	I1206 10:36:59.394145  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.394152  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:59.394172  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:59.394182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:59.462514  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:59.462536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.491731  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:59.491747  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:59.562406  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:59.562426  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:59.577286  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:59.577302  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:59.642145  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
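
With no control-plane containers to inspect, minikube falls back to node-level sources: the kubelet and CRI-O journals, the kernel ring buffer, the runtime's container list, and a kubectl describe nodes that fails for as long as nothing answers on localhost:8441. The same evidence can be pulled manually with the commands the log shows:

    # Gather the same log sources minikube polls (run inside the node).
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # refused while :8441 is down
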
	I1206 10:37:02.143135  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:02.153343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:02.153402  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:02.182430  528268 cri.go:89] found id: ""
	I1206 10:37:02.182453  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.182460  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:02.182466  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:02.182529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:02.217140  528268 cri.go:89] found id: ""
	I1206 10:37:02.217164  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.217171  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:02.217176  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:02.217241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:02.264761  528268 cri.go:89] found id: ""
	I1206 10:37:02.264775  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.264795  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:02.264800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:02.264857  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:02.295104  528268 cri.go:89] found id: ""
	I1206 10:37:02.295118  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.295161  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:02.295166  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:02.295232  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:02.324690  528268 cri.go:89] found id: ""
	I1206 10:37:02.324704  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.324711  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:02.324716  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:02.324776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:02.354165  528268 cri.go:89] found id: ""
	I1206 10:37:02.354179  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.354187  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:02.354192  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:02.354250  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:02.379657  528268 cri.go:89] found id: ""
	I1206 10:37:02.379671  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.379679  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:02.379686  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:02.379697  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:02.449725  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:02.449746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:02.464766  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:02.464783  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:02.527444  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:02.527457  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:02.527467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:02.595482  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:02.595503  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.126581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:05.136725  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:05.136783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:05.162008  528268 cri.go:89] found id: ""
	I1206 10:37:05.162022  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.162049  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:05.162055  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:05.162123  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:05.190290  528268 cri.go:89] found id: ""
	I1206 10:37:05.190305  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.190313  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:05.190318  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:05.190399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:05.222971  528268 cri.go:89] found id: ""
	I1206 10:37:05.223000  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.223008  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:05.223013  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:05.223083  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:05.249192  528268 cri.go:89] found id: ""
	I1206 10:37:05.249206  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.249213  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:05.249218  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:05.249285  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:05.280084  528268 cri.go:89] found id: ""
	I1206 10:37:05.280097  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.280104  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:05.280110  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:05.280176  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:05.306008  528268 cri.go:89] found id: ""
	I1206 10:37:05.306036  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.306044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:05.306049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:05.306115  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:05.331829  528268 cri.go:89] found id: ""
	I1206 10:37:05.331843  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.331850  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:05.331858  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:05.331868  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:05.394775  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:05.394787  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:05.394798  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:05.463063  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:05.463082  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.496791  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:05.496808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:05.562749  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:05.562768  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.077865  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.088556  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:08.088628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:08.114942  528268 cri.go:89] found id: ""
	I1206 10:37:08.114956  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.114963  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:08.114969  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:08.115027  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:08.141141  528268 cri.go:89] found id: ""
	I1206 10:37:08.141155  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.141162  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:08.141167  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:08.141235  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:08.166303  528268 cri.go:89] found id: ""
	I1206 10:37:08.166318  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.166325  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:08.166334  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:08.166394  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:08.199234  528268 cri.go:89] found id: ""
	I1206 10:37:08.199248  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.199255  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:08.199260  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:08.199326  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:08.231753  528268 cri.go:89] found id: ""
	I1206 10:37:08.231767  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.231774  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:08.231780  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:08.231842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:08.260152  528268 cri.go:89] found id: ""
	I1206 10:37:08.260166  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.260173  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:08.260179  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:08.260241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:08.285346  528268 cri.go:89] found id: ""
	I1206 10:37:08.285360  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.285367  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:08.285378  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:08.285388  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:08.353719  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:08.353740  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:08.385085  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:08.385101  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:08.459734  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:08.459762  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.474846  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:08.474862  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:08.546432  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
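
Every describe-nodes attempt fails identically: dial tcp [::1]:8441: connect: connection refused, meaning nothing is accepting connections on the apiserver port at all, which matches crictl finding no kube-apiserver container. To confirm the port is closed rather than merely rejecting the request, one could probe it directly; a sketch, assuming the ss and curl binaries are present in the node image (the log does not show them):

    # Is anything listening on the apiserver port? (run inside the node)
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on :8441"
    # A healthy apiserver answers /livez; a closed port fails at connect().
    curl -ksS https://localhost:8441/livez
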
	I1206 10:37:11.048129  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.058654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:11.058714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:11.086873  528268 cri.go:89] found id: ""
	I1206 10:37:11.086889  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.086896  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:11.086903  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:11.086965  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:11.113880  528268 cri.go:89] found id: ""
	I1206 10:37:11.113904  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.113912  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:11.113918  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:11.113987  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:11.142338  528268 cri.go:89] found id: ""
	I1206 10:37:11.142361  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.142370  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:11.142375  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:11.142448  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:11.168341  528268 cri.go:89] found id: ""
	I1206 10:37:11.168355  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.168362  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:11.168368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:11.168425  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:11.218236  528268 cri.go:89] found id: ""
	I1206 10:37:11.218277  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.218285  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:11.218290  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:11.218357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:11.257366  528268 cri.go:89] found id: ""
	I1206 10:37:11.257379  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.257386  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:11.257391  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:11.257455  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:11.283202  528268 cri.go:89] found id: ""
	I1206 10:37:11.283224  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.283235  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:11.283251  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:11.283269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:11.349630  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:11.349650  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:11.365578  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:11.365606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:11.431959  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:11.431970  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:11.431981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:11.502903  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:11.502922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:14.032953  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.043177  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:14.043291  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:14.068855  528268 cri.go:89] found id: ""
	I1206 10:37:14.068870  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.068877  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:14.068882  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:14.068946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:14.094277  528268 cri.go:89] found id: ""
	I1206 10:37:14.094290  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.094308  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:14.094315  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:14.094372  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:14.119916  528268 cri.go:89] found id: ""
	I1206 10:37:14.119930  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.119948  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:14.119954  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:14.120029  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:14.144999  528268 cri.go:89] found id: ""
	I1206 10:37:14.145012  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.145020  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:14.145026  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:14.145088  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:14.170372  528268 cri.go:89] found id: ""
	I1206 10:37:14.170386  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.170404  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:14.170409  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:14.170475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:14.220015  528268 cri.go:89] found id: ""
	I1206 10:37:14.220029  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.220036  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:14.220041  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:14.220102  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:14.249187  528268 cri.go:89] found id: ""
	I1206 10:37:14.249201  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.249208  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:14.249216  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:14.249226  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:14.315809  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:14.315830  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:14.331228  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:14.331245  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:14.394665  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:14.394676  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:14.394686  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:14.466599  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:14.466623  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
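
On a kubeadm-bootstrapped node like this one, kube-apiserver runs as a static pod that the kubelet is responsible for (re)starting, so when the probe stays empty this long, the static-pod manifests and the kubelet journal already being tailed above are the next place to look. A sketch; the manifest path is the kubeadm default and is an assumption here, not something the log confirms:

    # Static-pod manifests (kubeadm default path, assumed):
    ls -l /etc/kubernetes/manifests/
    # Kubelet's recent complaints about the apiserver pod:
    sudo journalctl -u kubelet -n 400 | grep -i kube-apiserver | tail -n 20
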
	I1206 10:37:16.996304  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.008394  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:17.008453  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:17.036500  528268 cri.go:89] found id: ""
	I1206 10:37:17.036513  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.036521  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:17.036526  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:17.036591  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:17.064759  528268 cri.go:89] found id: ""
	I1206 10:37:17.064773  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.064780  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:17.064785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:17.064846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:17.095263  528268 cri.go:89] found id: ""
	I1206 10:37:17.095276  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.095284  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:17.095300  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:17.095364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:17.121651  528268 cri.go:89] found id: ""
	I1206 10:37:17.121665  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.121673  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:17.121678  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:17.121747  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:17.148683  528268 cri.go:89] found id: ""
	I1206 10:37:17.148697  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.148704  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:17.148711  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:17.148773  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:17.180504  528268 cri.go:89] found id: ""
	I1206 10:37:17.180518  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.180535  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:17.180542  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:17.180611  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:17.208816  528268 cri.go:89] found id: ""
	I1206 10:37:17.208830  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.208837  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:17.208844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:17.208854  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:17.277798  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:17.277818  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:17.292728  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:17.292743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:17.366791  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:17.366801  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:17.366812  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:17.434192  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:17.434212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:19.971273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.981226  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:19.981286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:20.019762  528268 cri.go:89] found id: ""
	I1206 10:37:20.019777  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.019785  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:20.019791  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:20.019866  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:20.047256  528268 cri.go:89] found id: ""
	I1206 10:37:20.047270  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.047278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:20.047283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:20.047345  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:20.075694  528268 cri.go:89] found id: ""
	I1206 10:37:20.075708  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.075716  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:20.075721  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:20.075785  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:20.105896  528268 cri.go:89] found id: ""
	I1206 10:37:20.105910  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.105917  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:20.105922  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:20.105981  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:20.131910  528268 cri.go:89] found id: ""
	I1206 10:37:20.131923  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.131930  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:20.131935  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:20.131997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:20.157115  528268 cri.go:89] found id: ""
	I1206 10:37:20.157129  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.157135  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:20.157140  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:20.157202  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:20.188374  528268 cri.go:89] found id: ""
	I1206 10:37:20.188394  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.188401  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:20.188423  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:20.188434  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:20.267587  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:20.267607  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:20.283222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:20.283238  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:20.348772  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
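Every retry in this stretch fails the same way: the node-local kubectl cannot reach the apiserver on localhost:8441, so each "describe nodes" exits 1 with connection refused. A quick way to confirm nothing is listening on that port is to probe it from inside the node. This is a sketch, not from the log; <profile> is a placeholder for the profile under test, and 8441 is the port shown in the errors above:

    # Probe the apiserver port from inside the minikube node (sketch).
    minikube -p <profile> ssh -- curl -sk https://localhost:8441/healthz
    # "connection refused" here matches the memcache.go errors: kube-apiserver
    # never came up, so nothing is bound to 8441.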
	I1206 10:37:20.348783  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:20.348796  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:20.415451  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:20.415474  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
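The container-status probe above is deliberately defensive: the backtick command substitution resolves crictl from PATH (falling back to the bare name if "which" finds nothing), and the trailing "|| sudo docker ps -a" covers docker-based runtimes where crictl is absent. A standalone equivalent, as a sketch:

    # Same fallback logic as the log line above, written out:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a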
	I1206 10:37:22.948223  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.959160  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:22.959221  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:22.985131  528268 cri.go:89] found id: ""
	I1206 10:37:22.985144  528268 logs.go:282] 0 containers: []
	W1206 10:37:22.985151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:22.985156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:22.985242  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:23.012336  528268 cri.go:89] found id: ""
	I1206 10:37:23.012350  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.012358  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:23.012363  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:23.012433  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:23.037784  528268 cri.go:89] found id: ""
	I1206 10:37:23.037808  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.037816  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:23.037822  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:23.037899  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:23.066240  528268 cri.go:89] found id: ""
	I1206 10:37:23.066254  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.066262  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:23.066267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:23.066335  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:23.090898  528268 cri.go:89] found id: ""
	I1206 10:37:23.090912  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.090921  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:23.090926  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:23.090993  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:23.116011  528268 cri.go:89] found id: ""
	I1206 10:37:23.116039  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.116047  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:23.116052  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:23.116127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:23.140768  528268 cri.go:89] found id: ""
	I1206 10:37:23.140781  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.140788  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:23.140796  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:23.140806  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:23.210300  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:23.210319  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
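The dmesg invocation is worth unpacking: with util-linux dmesg, -H is --human, -P is --nopager, -L=never disables color, and --level restricts output to the listed priorities (warn and above); "tail -n 400" then caps the volume. A long-form equivalent, as a sketch whose behavior should match:

    sudo dmesg --human --nopager --color=never \
        --level warn,err,crit,alert,emerg | tail -n 400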
	I1206 10:37:23.229296  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:23.229311  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:23.297415  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:23.297428  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:23.297438  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:23.364180  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:23.364200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
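The paired lines 'found id: ""' and "0 containers: []" in each pass reflect crictl's --quiet mode, which prints one container ID per line and nothing at all when no container matches the --name filter. A minimal sketch of the same empty-output check:

    # Empty --quiet output means no matching container (sketch):
    ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    if [ -z "$ids" ]; then
        echo "0 containers"
    fi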
	I1206 10:37:25.892120  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.902322  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:25.902381  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:25.931154  528268 cri.go:89] found id: ""
	I1206 10:37:25.931168  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.931175  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:25.931180  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:25.931245  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:25.957709  528268 cri.go:89] found id: ""
	I1206 10:37:25.957724  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.957731  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:25.957736  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:25.957793  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:25.985765  528268 cri.go:89] found id: ""
	I1206 10:37:25.985779  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.985786  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:25.985791  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:25.985849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:26.016739  528268 cri.go:89] found id: ""
	I1206 10:37:26.016859  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.016867  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:26.016873  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:26.016945  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:26.043228  528268 cri.go:89] found id: ""
	I1206 10:37:26.043242  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.043252  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:26.043258  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:26.043331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:26.069862  528268 cri.go:89] found id: ""
	I1206 10:37:26.069888  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.069896  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:26.069902  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:26.069979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:26.097635  528268 cri.go:89] found id: ""
	I1206 10:37:26.097651  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.097659  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:26.097666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:26.097677  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:26.163107  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:26.163132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:26.177703  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:26.177723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:26.254904  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:26.254915  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:26.254927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:26.322703  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:26.322723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
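Each pass opens with the same pgrep probe, shown again just below: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n keeps only the newest match. A nonzero exit (no kube-apiserver process) is what sends minikube back into log gathering. Sketch:

    # Exit status drives the loop: 0 = process found, 1 = keep waiting.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process is up"
    else
        echo "apiserver not running yet"
    fi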
	I1206 10:37:28.850178  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.860819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:28.860878  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:28.887162  528268 cri.go:89] found id: ""
	I1206 10:37:28.887175  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.887183  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:28.887188  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:28.887246  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:28.912223  528268 cri.go:89] found id: ""
	I1206 10:37:28.912237  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.912251  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:28.912256  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:28.912318  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:28.937893  528268 cri.go:89] found id: ""
	I1206 10:37:28.937907  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.937914  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:28.937920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:28.937979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:28.966798  528268 cri.go:89] found id: ""
	I1206 10:37:28.966812  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.966819  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:28.966825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:28.966887  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:28.994392  528268 cri.go:89] found id: ""
	I1206 10:37:28.994406  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.994413  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:28.994418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:28.994480  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:29.020703  528268 cri.go:89] found id: ""
	I1206 10:37:29.020718  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.020725  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:29.020730  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:29.020788  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:29.049956  528268 cri.go:89] found id: ""
	I1206 10:37:29.049969  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.049977  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:29.049986  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:29.049998  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:29.116113  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:29.116133  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:29.130937  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:29.130954  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:29.199649  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:29.199659  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:29.199670  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:29.271990  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:29.272011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
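The poll timestamps (10:37:22.9, :25.9, :28.9, :31.8, ...) land roughly three seconds apart, consistent with a fixed short sleep between passes. A hypothetical reconstruction of that wait loop; the interval and the 300 s deadline are assumptions, not values from the log:

    deadline=$((SECONDS + 300))   # assumed timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "timed out"; exit 1; }
        sleep 3                   # matches the ~3 s spacing seen above
    done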
	I1206 10:37:31.801925  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.812057  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:31.812130  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:31.837642  528268 cri.go:89] found id: ""
	I1206 10:37:31.837656  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.837663  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:31.837668  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:31.837724  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:31.863706  528268 cri.go:89] found id: ""
	I1206 10:37:31.863721  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.863728  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:31.863733  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:31.863795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:31.892284  528268 cri.go:89] found id: ""
	I1206 10:37:31.892298  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.892305  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:31.892310  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:31.892370  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:31.920973  528268 cri.go:89] found id: ""
	I1206 10:37:31.920987  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.920994  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:31.920999  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:31.921072  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:31.946196  528268 cri.go:89] found id: ""
	I1206 10:37:31.946209  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.946216  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:31.946221  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:31.946280  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:31.972154  528268 cri.go:89] found id: ""
	I1206 10:37:31.972168  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.972176  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:31.972182  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:31.972273  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:31.998166  528268 cri.go:89] found id: ""
	I1206 10:37:31.998179  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.998194  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:31.998202  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:31.998212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:32.066002  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:32.066020  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:32.081440  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:32.081456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:32.155010  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:32.155021  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:32.155032  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:32.239005  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:32.239035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:34.779578  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.789994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:34.790061  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:34.817069  528268 cri.go:89] found id: ""
	I1206 10:37:34.817083  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.817091  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:34.817096  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:34.817154  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:34.843456  528268 cri.go:89] found id: ""
	I1206 10:37:34.843470  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.843478  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:34.843483  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:34.843540  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:34.873150  528268 cri.go:89] found id: ""
	I1206 10:37:34.873164  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.873171  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:34.873176  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:34.873236  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:34.901463  528268 cri.go:89] found id: ""
	I1206 10:37:34.901476  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.901483  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:34.901489  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:34.901546  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:34.930362  528268 cri.go:89] found id: ""
	I1206 10:37:34.930376  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.930383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:34.930389  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:34.930460  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:34.955907  528268 cri.go:89] found id: ""
	I1206 10:37:34.955920  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.955928  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:34.955936  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:34.955997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:34.981646  528268 cri.go:89] found id: ""
	I1206 10:37:34.981660  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.981667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:34.981676  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:34.981690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:35.051925  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:35.051946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:35.067379  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:35.067395  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:35.132911  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
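Note that the failing describe uses the versioned kubectl staged on the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) with the node's own kubeconfig, so the failure is independent of any host-side kubectl. Reproducing it by hand looks like this sketch, with <profile> again a placeholder:

    minikube -p <profile> ssh -- sudo \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig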
	I1206 10:37:35.132921  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:35.132932  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:35.203071  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:35.203091  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:37.738787  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.749325  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:37.749395  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:37.777933  528268 cri.go:89] found id: ""
	I1206 10:37:37.777947  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.777955  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:37.777961  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:37.778018  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:37.803626  528268 cri.go:89] found id: ""
	I1206 10:37:37.803640  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.803647  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:37.803652  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:37.803711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:37.829518  528268 cri.go:89] found id: ""
	I1206 10:37:37.829532  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.829540  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:37.829545  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:37.829608  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:37.854832  528268 cri.go:89] found id: ""
	I1206 10:37:37.854846  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.854853  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:37.854858  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:37.854918  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:37.879627  528268 cri.go:89] found id: ""
	I1206 10:37:37.879641  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.879649  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:37.879654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:37.879712  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:37.906054  528268 cri.go:89] found id: ""
	I1206 10:37:37.906067  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.906074  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:37.906080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:37.906137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:37.931611  528268 cri.go:89] found id: ""
	I1206 10:37:37.931624  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.931632  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:37.931640  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:37.931651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:37.997740  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:37.997760  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:38.023284  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:38.023303  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:38.091986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:38.082741   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.083460   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.085430   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.086101   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.087877   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:38.082741   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.083460   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.085430   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.086101   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.087877   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:38.092014  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:38.092027  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:38.163320  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:38.163343  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:40.709445  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.720016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:40.720077  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:40.745539  528268 cri.go:89] found id: ""
	I1206 10:37:40.745554  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.745561  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:40.745566  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:40.745630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:40.775524  528268 cri.go:89] found id: ""
	I1206 10:37:40.775538  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.775546  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:40.775552  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:40.775612  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:40.800974  528268 cri.go:89] found id: ""
	I1206 10:37:40.800988  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.800995  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:40.801001  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:40.801064  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:40.825855  528268 cri.go:89] found id: ""
	I1206 10:37:40.825869  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.825877  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:40.825882  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:40.825940  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:40.856039  528268 cri.go:89] found id: ""
	I1206 10:37:40.856052  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.856059  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:40.856064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:40.856129  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:40.886499  528268 cri.go:89] found id: ""
	I1206 10:37:40.886513  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.886520  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:40.886527  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:40.886586  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:40.913975  528268 cri.go:89] found id: ""
	I1206 10:37:40.913989  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.913996  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:40.914004  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:40.914014  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:40.979882  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:40.979904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:40.995137  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:40.995155  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:41.060228  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:41.051325   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.052002   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.053633   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.054141   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.055869   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:41.051325   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.052002   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.053633   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.054141   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.055869   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:41.060245  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:41.060258  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:41.130025  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:41.130046  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.659238  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.669354  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:43.669430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:43.694872  528268 cri.go:89] found id: ""
	I1206 10:37:43.694886  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.694893  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:43.694899  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:43.694956  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:43.720265  528268 cri.go:89] found id: ""
	I1206 10:37:43.720278  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.720286  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:43.720290  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:43.720349  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:43.746213  528268 cri.go:89] found id: ""
	I1206 10:37:43.746226  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.746234  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:43.746239  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:43.746300  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:43.771902  528268 cri.go:89] found id: ""
	I1206 10:37:43.771916  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.771923  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:43.771928  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:43.771984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:43.797840  528268 cri.go:89] found id: ""
	I1206 10:37:43.797854  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.797874  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:43.797879  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:43.797949  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:43.823569  528268 cri.go:89] found id: ""
	I1206 10:37:43.823583  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.823590  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:43.823596  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:43.823654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:43.850154  528268 cri.go:89] found id: ""
	I1206 10:37:43.850169  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.850187  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:43.850196  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:43.850207  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:43.919668  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:43.919690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.954253  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:43.954269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:44.019533  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:44.019556  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:44.034911  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:44.034930  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:44.098130  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:46.599796  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.610343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:46.610410  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:46.637289  528268 cri.go:89] found id: ""
	I1206 10:37:46.637304  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.637311  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:46.637317  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:46.637380  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:46.664098  528268 cri.go:89] found id: ""
	I1206 10:37:46.664112  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.664118  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:46.664123  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:46.664183  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:46.693606  528268 cri.go:89] found id: ""
	I1206 10:37:46.693619  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.693638  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:46.693644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:46.693718  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:46.719425  528268 cri.go:89] found id: ""
	I1206 10:37:46.719438  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.719445  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:46.719451  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:46.719511  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:46.748960  528268 cri.go:89] found id: ""
	I1206 10:37:46.748974  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.748982  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:46.748987  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:46.749047  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:46.782749  528268 cri.go:89] found id: ""
	I1206 10:37:46.782763  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.782770  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:46.782776  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:46.782846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:46.807615  528268 cri.go:89] found id: ""
	I1206 10:37:46.807629  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.807636  528268 logs.go:284] No container was found matching "kindnet"
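Each "listing CRI containers" probe above is one crictl call per control-plane component, and an empty ID list is what logs.go reports as "0 containers". The same check can be run by hand, with the flags taken straight from the log:

	# list containers in any state whose name matches the component regex
	sudo crictl ps -a --quiet --name=kube-apiserver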
	I1206 10:37:46.807644  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:46.807654  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:46.838618  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:46.838634  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:46.905518  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:46.905537  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:46.920399  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:46.920417  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:46.985957  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:46.978179   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.978741   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980269   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980715   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.982218   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:46.985968  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:46.985981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
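The "Gathering logs" steps are plain journalctl reads of the node's systemd units; the same data can be pulled manually, with the unit names and line counts as above:

	sudo journalctl -u crio -n 400 --no-pager      # container runtime
	sudo journalctl -u kubelet -n 400 --no-pager   # kubelet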
	I1206 10:37:49.555258  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.565209  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:49.565266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:49.593833  528268 cri.go:89] found id: ""
	I1206 10:37:49.593846  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.593853  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:49.593858  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:49.593914  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:49.621098  528268 cri.go:89] found id: ""
	I1206 10:37:49.621111  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.621119  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:49.621124  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:49.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:49.645669  528268 cri.go:89] found id: ""
	I1206 10:37:49.645681  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.645689  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:49.645694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:49.645750  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:49.672058  528268 cri.go:89] found id: ""
	I1206 10:37:49.672072  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.672080  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:49.672085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:49.672140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:49.696988  528268 cri.go:89] found id: ""
	I1206 10:37:49.697002  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.697009  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:49.697015  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:49.697076  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:49.723261  528268 cri.go:89] found id: ""
	I1206 10:37:49.723275  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.723282  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:49.723287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:49.723357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:49.750307  528268 cri.go:89] found id: ""
	I1206 10:37:49.750321  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.750328  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:49.750336  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:49.750346  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:49.765699  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:49.765721  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:49.827929  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:49.819281   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.820177   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.821896   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.822193   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.823677   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:49.827938  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:49.827962  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.899802  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:49.899820  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:49.928018  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:49.928035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.495744  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
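The pgrep call that opens each cycle is the fast-path check for a live apiserver process, flags as in the log (the quotes are only needed when running it in an interactive shell):

	# -x whole-cmdline match, -n newest only, -f match against the full command line
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'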
	I1206 10:37:52.505888  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:52.505958  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:52.532610  528268 cri.go:89] found id: ""
	I1206 10:37:52.532623  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.532631  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:52.532636  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:52.532695  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:52.558679  528268 cri.go:89] found id: ""
	I1206 10:37:52.558692  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.558700  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:52.558705  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:52.558762  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:52.585203  528268 cri.go:89] found id: ""
	I1206 10:37:52.585217  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.585225  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:52.585230  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:52.585286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:52.611483  528268 cri.go:89] found id: ""
	I1206 10:37:52.611496  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.611503  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:52.611510  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:52.611568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:52.638054  528268 cri.go:89] found id: ""
	I1206 10:37:52.638067  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.638075  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:52.638080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:52.638137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:52.666746  528268 cri.go:89] found id: ""
	I1206 10:37:52.666760  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.666767  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:52.666773  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:52.666833  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:52.691974  528268 cri.go:89] found id: ""
	I1206 10:37:52.691997  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.692005  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:52.692015  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:52.692025  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:52.761093  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:52.761113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:52.790376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:52.790392  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.858897  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:52.858915  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:52.873906  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:52.873923  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:52.937907  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:52.929773   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.930648   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932194   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932561   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.934055   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
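The "describe nodes" gather shells out to the kubectl binary minikube staged on the node, pointed at the in-node kubeconfig, so the failing step can be replayed verbatim from a node shell:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
		--kubeconfig=/var/lib/minikube/kubeconfig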
	I1206 10:37:55.439279  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.450466  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:55.450529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:55.483494  528268 cri.go:89] found id: ""
	I1206 10:37:55.483508  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.483515  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:55.483520  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:55.483576  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:55.515860  528268 cri.go:89] found id: ""
	I1206 10:37:55.515874  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.515881  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:55.515886  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:55.515942  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:55.542224  528268 cri.go:89] found id: ""
	I1206 10:37:55.542239  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.542248  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:55.542253  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:55.542311  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:55.567547  528268 cri.go:89] found id: ""
	I1206 10:37:55.567561  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.567568  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:55.567574  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:55.567630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:55.594478  528268 cri.go:89] found id: ""
	I1206 10:37:55.594491  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.594499  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:55.594505  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:55.594568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:55.620118  528268 cri.go:89] found id: ""
	I1206 10:37:55.620132  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.620146  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:55.620151  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:55.620210  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:55.644692  528268 cri.go:89] found id: ""
	I1206 10:37:55.644706  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.644713  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:55.644721  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:55.644732  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:55.712056  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:55.702146   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.702755   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704324   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704667   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.708009   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.712075  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:55.712085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:55.782393  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:55.782414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:55.817896  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:55.817913  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:55.892357  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:55.892385  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
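The dmesg gather keeps only kernel messages of warning severity and above; decoded, the short flags are (assuming a util-linux recent enough to know -P):

	# -P no pager, -H human-readable timestamps, -L=never disable color
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400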
	I1206 10:37:58.407847  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.417968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:58.418026  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:58.446859  528268 cri.go:89] found id: ""
	I1206 10:37:58.446872  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.446879  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:58.446884  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:58.446946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:58.475161  528268 cri.go:89] found id: ""
	I1206 10:37:58.475175  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.475182  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:58.475187  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:58.475244  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:58.503498  528268 cri.go:89] found id: ""
	I1206 10:37:58.503513  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.503520  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:58.503525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:58.503583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:58.529955  528268 cri.go:89] found id: ""
	I1206 10:37:58.529970  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.529977  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:58.529983  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:58.530038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:58.557174  528268 cri.go:89] found id: ""
	I1206 10:37:58.557188  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.557196  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:58.557201  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:58.557259  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:58.586116  528268 cri.go:89] found id: ""
	I1206 10:37:58.586130  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.586149  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:58.586156  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:58.586211  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:58.620339  528268 cri.go:89] found id: ""
	I1206 10:37:58.620353  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.620361  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:58.620368  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:58.620379  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:58.686086  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:58.686105  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.700471  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:58.700487  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:58.772759  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:58.764751   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.765482   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767041   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767492   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.769066   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:58.772768  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:58.772779  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:58.841699  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:58.841718  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
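The "container status" gather is a small shell fallback chain: use crictl if it resolves on PATH, otherwise fall back to docker:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a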
	I1206 10:38:01.372136  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.382712  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:01.382776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:01.410577  528268 cri.go:89] found id: ""
	I1206 10:38:01.410591  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.410598  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:01.410603  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:01.410666  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:01.444228  528268 cri.go:89] found id: ""
	I1206 10:38:01.444251  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.444258  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:01.444264  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:01.444331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:01.486632  528268 cri.go:89] found id: ""
	I1206 10:38:01.486645  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.486652  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:01.486657  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:01.486717  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:01.518190  528268 cri.go:89] found id: ""
	I1206 10:38:01.518203  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.518210  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:01.518215  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:01.518276  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:01.543942  528268 cri.go:89] found id: ""
	I1206 10:38:01.543956  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.543963  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:01.543968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:01.544032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:01.569769  528268 cri.go:89] found id: ""
	I1206 10:38:01.569803  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.569832  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:01.569845  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:01.569902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:01.594441  528268 cri.go:89] found id: ""
	I1206 10:38:01.594456  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.594463  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:01.594471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:01.594482  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:01.609124  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:01.609139  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:01.671291  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:01.663080   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.663834   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665465   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665773   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.667299   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:01.671302  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:01.671312  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:01.739749  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:01.739769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.768671  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:01.768687  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.339038  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.349363  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:04.349432  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:04.375032  528268 cri.go:89] found id: ""
	I1206 10:38:04.375045  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.375052  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:04.375058  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:04.375139  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:04.399997  528268 cri.go:89] found id: ""
	I1206 10:38:04.400011  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.400018  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:04.400023  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:04.400081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:04.424851  528268 cri.go:89] found id: ""
	I1206 10:38:04.424876  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.424884  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:04.424889  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:04.424959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:04.453149  528268 cri.go:89] found id: ""
	I1206 10:38:04.453162  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.453170  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:04.453175  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:04.453263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:04.483514  528268 cri.go:89] found id: ""
	I1206 10:38:04.483527  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.483534  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:04.483540  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:04.483598  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:04.511967  528268 cri.go:89] found id: ""
	I1206 10:38:04.511980  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.511987  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:04.511993  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:04.512048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:04.541164  528268 cri.go:89] found id: ""
	I1206 10:38:04.541175  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.541182  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:04.541190  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:04.541199  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:04.575975  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:04.575991  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.642763  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:04.642781  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:04.657313  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:04.657336  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:04.721928  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:04.721939  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:04.721952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
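As the timestamps show, this probe-and-gather cycle repeats roughly every three seconds until minikube's wait timeout expires; with every crictl listing empty and every dial to 8441 refused, the control plane simply never came up. From the host, the same condition is quicker to read (the profile name below is a placeholder):

	minikube -p <profile> status                 # expect: apiserver: Stopped
	minikube -p <profile> logs --file=logs.txt   # full dump for offline triage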
	I1206 10:38:07.293453  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:07.303645  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:07.303708  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:07.329285  528268 cri.go:89] found id: ""
	I1206 10:38:07.329299  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.329306  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:07.329313  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:07.329371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:07.354889  528268 cri.go:89] found id: ""
	I1206 10:38:07.354903  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.354911  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:07.354916  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:07.354975  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:07.380496  528268 cri.go:89] found id: ""
	I1206 10:38:07.380510  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.380518  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:07.380523  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:07.380583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:07.408252  528268 cri.go:89] found id: ""
	I1206 10:38:07.408265  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.408272  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:07.408278  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:07.408341  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:07.434563  528268 cri.go:89] found id: ""
	I1206 10:38:07.434577  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.434584  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:07.434590  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:07.434656  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:07.465668  528268 cri.go:89] found id: ""
	I1206 10:38:07.465681  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.465688  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:07.465694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:07.465755  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:07.496206  528268 cri.go:89] found id: ""
	I1206 10:38:07.496220  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.496227  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:07.496252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:07.496291  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:07.561228  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:07.561250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:07.576434  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:07.576450  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:07.645534  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:07.637588   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.638151   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.639755   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.640208   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.641673   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:07.645544  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:07.645555  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:07.713688  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:07.713708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:10.250054  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:10.260518  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:10.260577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:10.287264  528268 cri.go:89] found id: ""
	I1206 10:38:10.287283  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.287291  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:10.287296  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:10.287358  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:10.312333  528268 cri.go:89] found id: ""
	I1206 10:38:10.312347  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.312355  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:10.312360  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:10.312420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:10.336978  528268 cri.go:89] found id: ""
	I1206 10:38:10.336993  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.337000  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:10.337004  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:10.337069  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:10.363441  528268 cri.go:89] found id: ""
	I1206 10:38:10.363455  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.363463  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:10.363468  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:10.363526  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:10.388225  528268 cri.go:89] found id: ""
	I1206 10:38:10.388245  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.388253  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:10.388259  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:10.388320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:10.414362  528268 cri.go:89] found id: ""
	I1206 10:38:10.414375  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.414382  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:10.414388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:10.414445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:10.454478  528268 cri.go:89] found id: ""
	I1206 10:38:10.454491  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.454499  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:10.454508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:10.454518  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:10.524830  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:10.524851  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:10.540277  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:10.540292  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:10.607931  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:10.607942  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:10.607955  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:10.675104  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:10.675134  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
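For anyone replaying this failure by hand, the gathering cycle above boils down to the following probe sequence (a minimal sketch assembled from the Run: lines in this log; the for-loop wrapper is editorial, while the individual commands, container names, paths, and the -n 400 journal window are copied verbatim from the log):

    # Readiness probe: is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Scan for each expected control-plane container by name
    # (every one of these scans returns empty in the log above)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done

    # The same logs minikube gathers once the scan comes up empty
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig

Each "describe nodes" attempt fails with connection refused on localhost:8441 because no kube-apiserver container ever comes up, which is consistent with the empty crictl scans.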
	I1206 10:38:13.206837  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:13.217943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:13.218002  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:13.243670  528268 cri.go:89] found id: ""
	I1206 10:38:13.243684  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.243691  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:13.243697  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:13.243758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:13.268428  528268 cri.go:89] found id: ""
	I1206 10:38:13.268443  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.268450  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:13.268455  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:13.268512  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:13.294024  528268 cri.go:89] found id: ""
	I1206 10:38:13.294038  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.294045  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:13.294050  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:13.294106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:13.321522  528268 cri.go:89] found id: ""
	I1206 10:38:13.321536  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.321543  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:13.321548  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:13.321610  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:13.351214  528268 cri.go:89] found id: ""
	I1206 10:38:13.351228  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.351235  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:13.351240  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:13.351299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:13.376433  528268 cri.go:89] found id: ""
	I1206 10:38:13.376447  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.376454  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:13.376459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:13.376520  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:13.405980  528268 cri.go:89] found id: ""
	I1206 10:38:13.405994  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.406001  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:13.406009  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:13.406019  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:13.481314  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:13.481334  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:13.503361  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:13.503378  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:13.570756  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:13.570765  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:13.570778  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:13.641258  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:13.641282  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:16.171913  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.182483  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.182545  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.210129  528268 cri.go:89] found id: ""
	I1206 10:38:16.210143  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.210151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.210156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.210217  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.237040  528268 cri.go:89] found id: ""
	I1206 10:38:16.237060  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.237067  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.237073  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.237134  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:16.263801  528268 cri.go:89] found id: ""
	I1206 10:38:16.263815  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.263822  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:16.263827  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:16.263886  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:16.289263  528268 cri.go:89] found id: ""
	I1206 10:38:16.289277  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.289284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:16.289289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:16.289347  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:16.317849  528268 cri.go:89] found id: ""
	I1206 10:38:16.317862  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.317870  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:16.317875  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:16.317933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:16.347303  528268 cri.go:89] found id: ""
	I1206 10:38:16.347317  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.347324  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:16.347329  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:16.347387  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:16.373512  528268 cri.go:89] found id: ""
	I1206 10:38:16.373525  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.373542  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:16.373552  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:16.373568  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:16.438751  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:16.438769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:16.455447  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:16.455463  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:16.527176  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:16.527186  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:16.527196  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:16.595033  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:16.595053  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:19.127162  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.137626  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.137685  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.168715  528268 cri.go:89] found id: ""
	I1206 10:38:19.168729  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.168736  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.168741  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.168798  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.199324  528268 cri.go:89] found id: ""
	I1206 10:38:19.199341  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.199354  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.199359  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.199418  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.225589  528268 cri.go:89] found id: ""
	I1206 10:38:19.225601  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.225608  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.225613  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.225670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.251399  528268 cri.go:89] found id: ""
	I1206 10:38:19.251412  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.251420  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.251425  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.251488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:19.276108  528268 cri.go:89] found id: ""
	I1206 10:38:19.276122  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.276129  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:19.276134  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:19.276193  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:19.301269  528268 cri.go:89] found id: ""
	I1206 10:38:19.301282  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.301290  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:19.301295  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:19.301352  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:19.327537  528268 cri.go:89] found id: ""
	I1206 10:38:19.327552  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.327559  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:19.327568  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:19.327578  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.398088  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:19.398114  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:19.413590  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:19.413609  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:19.517843  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:19.517853  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:19.517866  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:19.587464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:19.587485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.115984  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.126048  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.126111  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.152880  528268 cri.go:89] found id: ""
	I1206 10:38:22.152893  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.152900  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.152905  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.152961  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.179175  528268 cri.go:89] found id: ""
	I1206 10:38:22.179190  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.179197  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.179202  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.179263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.204543  528268 cri.go:89] found id: ""
	I1206 10:38:22.204557  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.204565  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.204570  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.204631  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.229269  528268 cri.go:89] found id: ""
	I1206 10:38:22.229283  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.229291  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.229296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.229353  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.255404  528268 cri.go:89] found id: ""
	I1206 10:38:22.255418  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.255425  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.255430  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.255488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.280965  528268 cri.go:89] found id: ""
	I1206 10:38:22.280981  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.280988  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.280994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.281052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:22.309901  528268 cri.go:89] found id: ""
	I1206 10:38:22.309915  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.309922  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:22.309930  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:22.309940  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:22.382110  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:22.382130  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.412045  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:22.412060  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:22.485902  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:22.485921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.501637  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:22.501655  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:22.572937  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:25.074598  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.085017  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.085084  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.110479  528268 cri.go:89] found id: ""
	I1206 10:38:25.110493  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.110500  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.110506  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.110566  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.137467  528268 cri.go:89] found id: ""
	I1206 10:38:25.137481  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.137488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.137493  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.137552  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.163017  528268 cri.go:89] found id: ""
	I1206 10:38:25.163033  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.163040  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.163046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.163105  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.193876  528268 cri.go:89] found id: ""
	I1206 10:38:25.193890  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.193898  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.193903  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.193966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.220362  528268 cri.go:89] found id: ""
	I1206 10:38:25.220376  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.220383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.220388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.220444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.246057  528268 cri.go:89] found id: ""
	I1206 10:38:25.246070  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.246078  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.246083  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.246140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.273646  528268 cri.go:89] found id: ""
	I1206 10:38:25.273660  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.273667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.273675  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.273691  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:25.341507  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:25.341527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:25.356890  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:25.356906  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:25.432607  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:25.432617  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:25.432628  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:25.515030  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.515052  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.053670  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.064577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.064641  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.091082  528268 cri.go:89] found id: ""
	I1206 10:38:28.091097  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.091106  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.091111  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.091205  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.116793  528268 cri.go:89] found id: ""
	I1206 10:38:28.116808  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.116815  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.116822  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.116881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.145938  528268 cri.go:89] found id: ""
	I1206 10:38:28.145952  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.145960  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.145965  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.146025  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.171742  528268 cri.go:89] found id: ""
	I1206 10:38:28.171755  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.171763  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.171768  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.171826  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.197528  528268 cri.go:89] found id: ""
	I1206 10:38:28.197542  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.197549  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.197554  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.197613  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.224277  528268 cri.go:89] found id: ""
	I1206 10:38:28.224291  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.224298  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.224303  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.224368  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.252201  528268 cri.go:89] found id: ""
	I1206 10:38:28.252215  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.252223  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.252237  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:28.252248  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.284626  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.284642  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.351035  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.351055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.366043  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.366061  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:28.437473  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:28.437483  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:28.437506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.019982  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.030426  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.030488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.055406  528268 cri.go:89] found id: ""
	I1206 10:38:31.055419  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.055427  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.055432  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.055490  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.081639  528268 cri.go:89] found id: ""
	I1206 10:38:31.081653  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.081660  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.081665  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.081729  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.111871  528268 cri.go:89] found id: ""
	I1206 10:38:31.111886  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.111894  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.111899  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.111959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.142949  528268 cri.go:89] found id: ""
	I1206 10:38:31.142964  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.142971  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.142977  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.143042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.169930  528268 cri.go:89] found id: ""
	I1206 10:38:31.169946  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.169954  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.169959  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.170020  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.196019  528268 cri.go:89] found id: ""
	I1206 10:38:31.196033  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.196041  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.196046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.196104  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.226526  528268 cri.go:89] found id: ""
	I1206 10:38:31.226540  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.226547  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.226556  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.226567  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.289723  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.289733  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:31.289746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.358922  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.358941  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.387252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.387268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:31.460730  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:31.460749  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:33.977403  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:33.987866  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:33.987933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.023637  528268 cri.go:89] found id: ""
	I1206 10:38:34.023651  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.023659  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.023664  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.023728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.052242  528268 cri.go:89] found id: ""
	I1206 10:38:34.052256  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.052263  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.052269  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.052330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.077707  528268 cri.go:89] found id: ""
	I1206 10:38:34.077721  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.077728  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.077734  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.077795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.103066  528268 cri.go:89] found id: ""
	I1206 10:38:34.103079  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.103098  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.103103  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.103185  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.132994  528268 cri.go:89] found id: ""
	I1206 10:38:34.133007  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.133015  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.133020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.133081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.159017  528268 cri.go:89] found id: ""
	I1206 10:38:34.159030  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.159038  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.159043  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.159101  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.185998  528268 cri.go:89] found id: ""
	I1206 10:38:34.186012  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.186020  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.186028  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.186042  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.257644  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.257664  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.273073  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.273092  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.344235  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.344247  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:34.344260  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:34.414848  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.414867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
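	The cycle above shows the per-component lookup pattern: for each expected control-plane container, minikube runs `crictl ps -a --quiet --name=<component>` and treats empty output as "0 containers". As a rough local sketch only (not minikube's actual cri.go, which runs these commands over SSH inside the node), the same check could be written like this, with the component list copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the lookup visible in the log:
	// `crictl ps -a --quiet --name=<name>` prints one container ID
	// per line, so empty output means no matching container exists.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Component names taken from the gather cycle above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("lookup %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}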
	I1206 10:38:36.966180  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:36.976392  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:36.976457  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.002549  528268 cri.go:89] found id: ""
	I1206 10:38:37.002566  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.002574  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.002580  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.002657  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.033009  528268 cri.go:89] found id: ""
	I1206 10:38:37.033024  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.033031  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.033037  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.033106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.059257  528268 cri.go:89] found id: ""
	I1206 10:38:37.059271  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.059279  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.059285  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.059346  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.090436  528268 cri.go:89] found id: ""
	I1206 10:38:37.090449  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.090457  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.090462  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.090523  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.118194  528268 cri.go:89] found id: ""
	I1206 10:38:37.118208  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.118215  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.118222  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.118284  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.144022  528268 cri.go:89] found id: ""
	I1206 10:38:37.144036  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.144044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.144049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.144107  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.168416  528268 cri.go:89] found id: ""
	I1206 10:38:37.168430  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.168438  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.168445  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.168456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.234878  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.234898  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.250351  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.250374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.316139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.316149  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:37.316159  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:37.385780  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.385800  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
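	The timestamps show the whole gather cycle re-running roughly every three seconds (10:38:34, :37, :40, ...), each pass gated on `sudo pgrep -xnf kube-apiserver.*minikube.*`. A minimal sketch of such a poll-until-found loop, assuming local (non-SSH) execution, a three-second interval inferred from the log, and a made-up two-minute deadline:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the gate seen in the log: pgrep exits
	// with status 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		ticker := time.NewTicker(3 * time.Second)
		defer ticker.Stop()

		for {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver not running yet; log gathering would happen here")
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting for kube-apiserver")
				return
			case <-ticker.C:
			}
		}
	}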
	I1206 10:38:39.916327  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:39.926345  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:39.926412  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:39.953639  528268 cri.go:89] found id: ""
	I1206 10:38:39.953652  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.953660  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:39.953671  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:39.953732  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:39.979049  528268 cri.go:89] found id: ""
	I1206 10:38:39.979064  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.979072  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:39.979077  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:39.979164  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.013684  528268 cri.go:89] found id: ""
	I1206 10:38:40.013700  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.013708  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.013714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.013783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.052804  528268 cri.go:89] found id: ""
	I1206 10:38:40.052820  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.052828  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.052834  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.052902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.084356  528268 cri.go:89] found id: ""
	I1206 10:38:40.084372  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.084380  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.084386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.084451  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.112282  528268 cri.go:89] found id: ""
	I1206 10:38:40.112297  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.112304  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.112312  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.112373  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.140065  528268 cri.go:89] found id: ""
	I1206 10:38:40.140080  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.140087  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.140094  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.140108  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.208521  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.208530  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:40.208541  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:40.280105  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.280126  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.313393  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.313409  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.380769  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.380789  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:42.896735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:42.906913  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:42.906971  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:42.932466  528268 cri.go:89] found id: ""
	I1206 10:38:42.932480  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.932493  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:42.932499  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:42.932560  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:42.962618  528268 cri.go:89] found id: ""
	I1206 10:38:42.962633  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.962641  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:42.962647  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:42.962704  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:42.989497  528268 cri.go:89] found id: ""
	I1206 10:38:42.989511  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.989519  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:42.989525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:42.989581  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.016798  528268 cri.go:89] found id: ""
	I1206 10:38:43.016818  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.016825  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.016831  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.017042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.044571  528268 cri.go:89] found id: ""
	I1206 10:38:43.044589  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.044599  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.044606  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.044679  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.072240  528268 cri.go:89] found id: ""
	I1206 10:38:43.072256  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.072264  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.072269  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.072330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.098196  528268 cri.go:89] found id: ""
	I1206 10:38:43.098211  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.098218  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.098225  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.098237  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.113559  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.113577  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.177585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.177595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:43.177606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:43.251189  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.251210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.278658  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.278673  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
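	Every `describe nodes` attempt fails identically: kubectl cannot reach the apiserver because nothing is listening on localhost:8441 (`dial tcp [::1]:8441: connect: connection refused`). A quick way to confirm that symptom independently of kubectl, as a sketch only (port 8441 is taken from the kubeconfig errors above; adjust for other clusters):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"127.0.0.1:8441", "[::1]:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				// "connection refused" here matches the kubectl
				// errors above: no process is bound to the port.
				fmt.Printf("%s: %v\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: something is listening\n", addr)
		}
	}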
	I1206 10:38:45.849509  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:45.861204  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:45.861266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:45.888209  528268 cri.go:89] found id: ""
	I1206 10:38:45.888228  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.888236  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:45.888241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:45.888306  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:45.913344  528268 cri.go:89] found id: ""
	I1206 10:38:45.913357  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.913365  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:45.913370  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:45.913429  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:45.939830  528268 cri.go:89] found id: ""
	I1206 10:38:45.939844  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.939852  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:45.939857  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:45.939927  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:45.964893  528268 cri.go:89] found id: ""
	I1206 10:38:45.964907  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.964914  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:45.964920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:45.964984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:45.991528  528268 cri.go:89] found id: ""
	I1206 10:38:45.991540  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.991548  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:45.991553  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:45.991614  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.018162  528268 cri.go:89] found id: ""
	I1206 10:38:46.018176  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.018184  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.018190  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.018249  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.045784  528268 cri.go:89] found id: ""
	I1206 10:38:46.045807  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.045814  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.045822  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.045833  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.114786  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.114796  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:46.114808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:46.185171  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.185193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.213442  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.213458  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.280354  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.280374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:48.796511  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:48.807012  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:48.807073  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:48.832313  528268 cri.go:89] found id: ""
	I1206 10:38:48.832337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.832344  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:48.832349  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:48.832420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:48.857914  528268 cri.go:89] found id: ""
	I1206 10:38:48.857928  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.857935  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:48.857940  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:48.858000  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:48.887721  528268 cri.go:89] found id: ""
	I1206 10:38:48.887735  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.887743  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:48.887748  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:48.887808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:48.912329  528268 cri.go:89] found id: ""
	I1206 10:38:48.912343  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.912351  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:48.912356  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:48.912416  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:48.942323  528268 cri.go:89] found id: ""
	I1206 10:38:48.942337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.942344  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:48.942349  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:48.942408  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:48.971776  528268 cri.go:89] found id: ""
	I1206 10:38:48.971790  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.971798  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:48.971803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:48.971861  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:48.997054  528268 cri.go:89] found id: ""
	I1206 10:38:48.997068  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.997076  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:48.997084  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:48.997095  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:49.071387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.071413  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.099724  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.099743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.165471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.165492  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.180707  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.180755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.246459  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
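	Note that each failed `describe nodes` entry prints the command's stderr twice, once under `stdout:/stderr:` and once between the `** stderr **` markers; this appears to be the log formatter replaying the same single run, not two invocations. For completeness, a sketch of running that exact kubectl invocation and capturing stdout and stderr separately (binary and kubeconfig paths copied from the log; this is illustration, not minikube's logs.go):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")

		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr

		err := cmd.Run()
		fmt.Printf("stdout:\n%s\n", stdout.String())
		fmt.Printf("stderr:\n%s\n", stderr.String())
		if err != nil {
			// With the apiserver down this exits with status 1,
			// exactly as recorded in the log above.
			fmt.Printf("command failed: %v\n", err)
		}
	}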
	I1206 10:38:51.747477  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:51.757424  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:51.757483  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:51.785368  528268 cri.go:89] found id: ""
	I1206 10:38:51.785382  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.785390  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:51.785395  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:51.785452  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:51.814468  528268 cri.go:89] found id: ""
	I1206 10:38:51.814482  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.814489  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:51.814494  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:51.814553  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:51.839897  528268 cri.go:89] found id: ""
	I1206 10:38:51.839911  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.839918  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:51.839923  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:51.839980  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:51.865924  528268 cri.go:89] found id: ""
	I1206 10:38:51.865938  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.865951  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:51.865956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:51.866011  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:51.891688  528268 cri.go:89] found id: ""
	I1206 10:38:51.891702  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.891709  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:51.891714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:51.891772  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:51.917048  528268 cri.go:89] found id: ""
	I1206 10:38:51.917062  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.917070  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:51.917075  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:51.917132  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:51.942873  528268 cri.go:89] found id: ""
	I1206 10:38:51.942888  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.942895  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:51.942903  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:51.942914  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.011199  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:52.011209  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:52.011220  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:52.085464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.085485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.119213  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.119230  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:52.189731  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.189751  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
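	The dmesg gather keeps only kernel messages at warning severity and above: in util-linux dmesg, -H asks for human-readable output, -P suppresses the pager that -H would otherwise start, -L=never disables color codes, and --level filters by severity (flag semantics stated here from dmesg(1), not from the log itself). A sketch of the same pipeline driven from Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same pipeline as the log's dmesg gather: human-readable,
		// no pager, no color, warnings and above, last 400 lines.
		script := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Printf("dmesg gather failed: %v\n", err)
		}
		fmt.Print(string(out))
	}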
	I1206 10:38:54.705436  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:54.717135  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:54.717196  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:54.755081  528268 cri.go:89] found id: ""
	I1206 10:38:54.755095  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.755105  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:54.755110  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:54.755209  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:54.780971  528268 cri.go:89] found id: ""
	I1206 10:38:54.780985  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.780993  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:54.780998  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:54.781060  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:54.806877  528268 cri.go:89] found id: ""
	I1206 10:38:54.806891  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.806898  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:54.806904  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:54.806967  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:54.832627  528268 cri.go:89] found id: ""
	I1206 10:38:54.832641  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.832649  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:54.832654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:54.832711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:54.857814  528268 cri.go:89] found id: ""
	I1206 10:38:54.857828  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.857836  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:54.857841  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:54.857897  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:54.883738  528268 cri.go:89] found id: ""
	I1206 10:38:54.883752  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.883759  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:54.883764  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:54.883821  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:54.909479  528268 cri.go:89] found id: ""
	I1206 10:38:54.909493  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.909500  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:54.909508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:54.909519  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:54.975629  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:54.975651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.991150  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:54.991166  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.064619  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:55.064628  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:55.064639  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:55.134387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.134406  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
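	The "container status" gather uses a two-level fallback: the backtick substitution `which crictl || echo crictl` resolves crictl's full path when available (falling back to the bare name), and if the crictl listing fails entirely the `|| sudo docker ps -a` clause tries Docker instead. The same prefer-crictl-else-docker logic, sketched without the shell layer:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the fallback in the log's "container
	// status" gather: prefer crictl, fall back to `docker ps -a`
	// when the crictl listing fails.
	func containerStatus() (string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("both crictl and docker listings failed: %w", err)
		}
		return string(out), nil
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(out)
	}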
	I1206 10:38:57.664428  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:57.675264  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:57.675328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:57.709021  528268 cri.go:89] found id: ""
	I1206 10:38:57.709035  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.709043  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:57.709048  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:57.709116  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:57.744132  528268 cri.go:89] found id: ""
	I1206 10:38:57.744146  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.744153  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:57.744159  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:57.744226  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:57.778746  528268 cri.go:89] found id: ""
	I1206 10:38:57.778760  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.778767  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:57.778772  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:57.778829  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:57.805263  528268 cri.go:89] found id: ""
	I1206 10:38:57.805276  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.805284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:57.805289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:57.805348  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:57.831152  528268 cri.go:89] found id: ""
	I1206 10:38:57.831166  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.831173  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:57.831178  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:57.831240  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:57.857097  528268 cri.go:89] found id: ""
	I1206 10:38:57.857111  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.857119  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:57.857124  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:57.857189  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:57.882945  528268 cri.go:89] found id: ""
	I1206 10:38:57.882984  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.882992  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:57.883000  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:57.883011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.915176  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:57.915193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:57.981939  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:57.981958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:57.997358  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:57.997373  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.070527  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:58.070538  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:58.070549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
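
The probe pass above repeats below with only timestamps and process IDs changing: minikube is polling for a control plane that never comes up, first checking for a kube-apiserver process, then for each expected control-plane container, and falling back to log collection when nothing is found. A minimal standalone sketch of that poll loop, assuming crictl is on PATH and sudo is available; the helper and names here are illustrative, not minikube's actual source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // run executes a shell command the way the log's ssh_runner lines do, but locally.
    func run(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for {
            // pgrep exits non-zero when no process matches the pattern.
            if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                fmt.Println("kube-apiserver process is running")
                return
            }
            for _, name := range components {
                out, _ := run("sudo crictl ps -a --quiet --name=" + name)
                if strings.TrimSpace(out) == "" {
                    fmt.Printf("no container found matching %q\n", name)
                }
            }
            time.Sleep(3 * time.Second) // the log shows roughly 3s between passes
        }
    }
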
	I1206 10:39:00.641789  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.651800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.651859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.679593  528268 cri.go:89] found id: ""
	I1206 10:39:00.679606  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.679613  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.679618  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.679673  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:00.712252  528268 cri.go:89] found id: ""
	I1206 10:39:00.712266  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.712273  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:00.712278  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:00.712337  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:00.746867  528268 cri.go:89] found id: ""
	I1206 10:39:00.746881  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.746888  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:00.746894  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:00.746954  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:00.779153  528268 cri.go:89] found id: ""
	I1206 10:39:00.779167  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.779174  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:00.779180  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:00.779241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:00.805143  528268 cri.go:89] found id: ""
	I1206 10:39:00.805157  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.805164  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:00.805170  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:00.805227  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:00.831339  528268 cri.go:89] found id: ""
	I1206 10:39:00.831353  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.831361  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:00.831368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:00.831430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:00.857571  528268 cri.go:89] found id: ""
	I1206 10:39:00.857585  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.857593  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:00.857600  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:00.857611  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:00.925179  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:00.925189  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:00.925200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.994191  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:00.994210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.029067  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.029085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.100689  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.100709  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.616374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.626603  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.626714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.651732  528268 cri.go:89] found id: ""
	I1206 10:39:03.651746  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.651753  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.651758  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.651818  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.679359  528268 cri.go:89] found id: ""
	I1206 10:39:03.679373  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.679380  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.679385  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.679442  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:03.714610  528268 cri.go:89] found id: ""
	I1206 10:39:03.714624  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.714631  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:03.714636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:03.714693  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:03.745765  528268 cri.go:89] found id: ""
	I1206 10:39:03.745780  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.745787  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:03.745792  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:03.745849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:03.771225  528268 cri.go:89] found id: ""
	I1206 10:39:03.771239  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.771247  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:03.771252  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:03.771316  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:03.796796  528268 cri.go:89] found id: ""
	I1206 10:39:03.796853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.796861  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:03.796867  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:03.796925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:03.822839  528268 cri.go:89] found id: ""
	I1206 10:39:03.822853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.822861  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:03.822878  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:03.822888  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:03.858844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:03.858860  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:03.925683  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:03.925703  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.941280  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:03.941297  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.009034  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:04.009044  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:04.009055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
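
Every `crictl ps -a --quiet --name=...` probe in this log returns empty output, which is what the paired `found id: ""` and `0 containers: []` lines record: in quiet mode crictl prints one container ID per line, so an empty string means no match. A hypothetical stand-in for that parsing step (parseIDs is an illustrative name, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseIDs splits `crictl ps --quiet` output into container IDs,
    // dropping blank lines; empty output yields an empty slice.
    func parseIDs(quietOutput string) []string {
        var ids []string
        for _, line := range strings.Split(quietOutput, "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids
    }

    func main() {
        ids := parseIDs("") // what every probe in this log returns
        fmt.Printf("%d containers: %v\n", len(ids), ids) // prints: 0 containers: []
    }
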
	I1206 10:39:06.582354  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.592267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.592340  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.617889  528268 cri.go:89] found id: ""
	I1206 10:39:06.617902  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.617909  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.617915  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.617979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.643951  528268 cri.go:89] found id: ""
	I1206 10:39:06.643966  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.643973  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.643978  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.644035  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.669753  528268 cri.go:89] found id: ""
	I1206 10:39:06.669767  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.669774  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.669779  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.669839  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.701353  528268 cri.go:89] found id: ""
	I1206 10:39:06.701373  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.701380  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.701386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.701445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.751930  528268 cri.go:89] found id: ""
	I1206 10:39:06.751944  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.751952  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.751956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.752019  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:06.778713  528268 cri.go:89] found id: ""
	I1206 10:39:06.778727  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.778734  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:06.778741  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:06.778802  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:06.804251  528268 cri.go:89] found id: ""
	I1206 10:39:06.804265  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.804273  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:06.804280  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:06.804290  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.871350  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:06.871368  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:06.885942  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:06.885960  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:06.959058  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:06.959068  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:06.959081  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:07.030114  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.030135  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:09.559397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.569971  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.570039  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.595039  528268 cri.go:89] found id: ""
	I1206 10:39:09.595052  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.595059  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.595065  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.595152  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.621113  528268 cri.go:89] found id: ""
	I1206 10:39:09.621127  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.621135  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.621140  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.651003  528268 cri.go:89] found id: ""
	I1206 10:39:09.651016  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.651024  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.651029  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.651087  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.677104  528268 cri.go:89] found id: ""
	I1206 10:39:09.677118  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.677125  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.677131  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.677187  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.713565  528268 cri.go:89] found id: ""
	I1206 10:39:09.713579  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.713587  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.713592  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.713653  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.741915  528268 cri.go:89] found id: ""
	I1206 10:39:09.741928  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.741935  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.741941  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.741997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.774013  528268 cri.go:89] found id: ""
	I1206 10:39:09.774027  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.774035  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.774042  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.774054  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:09.840091  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:09.840113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.855657  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:09.855675  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:09.919867  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:09.919877  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:09.919901  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:09.991592  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:09.991613  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.526559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.537148  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.537208  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.570214  528268 cri.go:89] found id: ""
	I1206 10:39:12.570228  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.570235  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.570241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.570299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.595309  528268 cri.go:89] found id: ""
	I1206 10:39:12.595324  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.595331  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.595342  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.595401  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.620408  528268 cri.go:89] found id: ""
	I1206 10:39:12.620422  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.620429  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.620434  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.620495  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.645606  528268 cri.go:89] found id: ""
	I1206 10:39:12.645621  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.645628  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.645644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.645700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.672105  528268 cri.go:89] found id: ""
	I1206 10:39:12.672119  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.672126  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.672132  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.672191  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.699949  528268 cri.go:89] found id: ""
	I1206 10:39:12.699964  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.699971  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.699976  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.700038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.730867  528268 cri.go:89] found id: ""
	I1206 10:39:12.730881  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.730888  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.730896  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.730907  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.760666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.760682  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.827918  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.827939  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:12.845229  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:12.845250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:12.913571  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:12.913582  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:12.913606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.486285  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.496339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.496397  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.522751  528268 cri.go:89] found id: ""
	I1206 10:39:15.522765  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.522773  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.522782  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.522842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.548733  528268 cri.go:89] found id: ""
	I1206 10:39:15.548747  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.548760  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.548765  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.548823  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.574392  528268 cri.go:89] found id: ""
	I1206 10:39:15.574406  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.574413  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.574418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.574475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.600281  528268 cri.go:89] found id: ""
	I1206 10:39:15.600297  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.600311  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.600316  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.600376  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.626469  528268 cri.go:89] found id: ""
	I1206 10:39:15.626482  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.626490  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.626496  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.626561  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.652394  528268 cri.go:89] found id: ""
	I1206 10:39:15.652407  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.652414  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.652420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.652477  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.679527  528268 cri.go:89] found id: ""
	I1206 10:39:15.679540  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.679553  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.679561  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:15.679571  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.764342  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:15.764363  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:15.798376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.798394  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.868665  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.868685  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.883983  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.883999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.952342  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
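
Each `kubectl describe nodes` attempt above and below fails before reaching the API: the TCP dial to the apiserver port is refused outright, so no credentials or TLS handshake are ever involved. A quick standalone connectivity check of just that step, with localhost:8441 taken from the errors in this log rather than from minikube's code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // With nothing listening, this prints the same "connection refused"
            // that every kubectl call in the log hits.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
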
	I1206 10:39:18.453493  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.463876  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.463935  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.490209  528268 cri.go:89] found id: ""
	I1206 10:39:18.490224  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.490231  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.490236  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.490294  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.516967  528268 cri.go:89] found id: ""
	I1206 10:39:18.516981  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.516988  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.516993  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.517054  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.546169  528268 cri.go:89] found id: ""
	I1206 10:39:18.546182  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.546189  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.546194  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.546253  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.571307  528268 cri.go:89] found id: ""
	I1206 10:39:18.571320  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.571327  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.571333  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.571391  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.596842  528268 cri.go:89] found id: ""
	I1206 10:39:18.596856  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.596863  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.596868  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.596924  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.622545  528268 cri.go:89] found id: ""
	I1206 10:39:18.622559  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.622566  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.622571  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.622628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.647866  528268 cri.go:89] found id: ""
	I1206 10:39:18.647879  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.647886  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.647894  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.647904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:18.722841  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:18.722867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:18.738489  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.738506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.804503  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.804514  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:18.804527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:18.873502  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.873520  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.404064  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.414555  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.414615  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.439357  528268 cri.go:89] found id: ""
	I1206 10:39:21.439371  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.439378  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.439384  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.439444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.464257  528268 cri.go:89] found id: ""
	I1206 10:39:21.464270  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.464278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.464283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.464342  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.489051  528268 cri.go:89] found id: ""
	I1206 10:39:21.489065  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.489072  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.489077  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.489133  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.514898  528268 cri.go:89] found id: ""
	I1206 10:39:21.514912  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.514919  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.514930  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.514988  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.540268  528268 cri.go:89] found id: ""
	I1206 10:39:21.540283  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.540290  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.540296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.540361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.564943  528268 cri.go:89] found id: ""
	I1206 10:39:21.564957  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.564965  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.564970  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.565031  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.590819  528268 cri.go:89] found id: ""
	I1206 10:39:21.590833  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.590840  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.590848  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.590858  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.656247  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:21.656258  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:21.656268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:21.726649  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.726669  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.757883  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.757900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.827592  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.827612  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:24.344952  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.355567  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.355629  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.381792  528268 cri.go:89] found id: ""
	I1206 10:39:24.381806  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.381814  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.381819  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.381880  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.406752  528268 cri.go:89] found id: ""
	I1206 10:39:24.406766  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.406773  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.406779  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.406837  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.435444  528268 cri.go:89] found id: ""
	I1206 10:39:24.435458  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.435466  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.435471  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.435537  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.460261  528268 cri.go:89] found id: ""
	I1206 10:39:24.460275  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.460282  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.460287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.460344  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.485676  528268 cri.go:89] found id: ""
	I1206 10:39:24.485689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.485697  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.485702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.485758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.515674  528268 cri.go:89] found id: ""
	I1206 10:39:24.515689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.515696  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.515702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.515759  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.540533  528268 cri.go:89] found id: ""
	I1206 10:39:24.540547  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.540555  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.540563  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.540573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.607514  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.607536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:24.622495  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.622512  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.688734  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.679787   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.680616   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.681733   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.682450   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.684164   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:24.688745  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:24.688755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:24.767851  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.767871  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.298384  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.308520  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.308577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.337406  528268 cri.go:89] found id: ""
	I1206 10:39:27.337421  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.337429  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.337434  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.337492  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.363616  528268 cri.go:89] found id: ""
	I1206 10:39:27.363630  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.363637  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.363643  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.363700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.387807  528268 cri.go:89] found id: ""
	I1206 10:39:27.387821  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.387828  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.387833  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.387892  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.417047  528268 cri.go:89] found id: ""
	I1206 10:39:27.417061  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.417068  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.417076  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.417135  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.443034  528268 cri.go:89] found id: ""
	I1206 10:39:27.443047  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.443055  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.443060  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.443156  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.469276  528268 cri.go:89] found id: ""
	I1206 10:39:27.469289  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.469297  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.469302  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.469361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.494605  528268 cri.go:89] found id: ""
	I1206 10:39:27.494619  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.494626  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.494634  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.494681  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.522899  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.522916  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.593447  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.593467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.608920  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.608937  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.673774  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:27.673784  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:27.673795  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:30.246836  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.257118  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.257181  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.285905  528268 cri.go:89] found id: ""
	I1206 10:39:30.285918  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.285926  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.285931  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.285991  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.312233  528268 cri.go:89] found id: ""
	I1206 10:39:30.312247  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.312254  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.312259  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.312320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.342032  528268 cri.go:89] found id: ""
	I1206 10:39:30.342047  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.342061  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.342066  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.342127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.371021  528268 cri.go:89] found id: ""
	I1206 10:39:30.371051  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.371059  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.371064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.371145  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.397540  528268 cri.go:89] found id: ""
	I1206 10:39:30.397554  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.397561  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.397566  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.397625  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.424004  528268 cri.go:89] found id: ""
	I1206 10:39:30.424018  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.424026  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.424033  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.424090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.450313  528268 cri.go:89] found id: ""
	I1206 10:39:30.450327  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.450335  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.450342  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.450352  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:30.516474  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.516493  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.532143  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.532160  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.595585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:30.595595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:30.595606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:30.664167  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.664186  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
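	--
	Every per-component listing above comes back empty (found id: ""). A short loop reproduces the same check for each control-plane container minikube looks for (a sketch, assuming crictl is on the node's PATH):
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      # count matching containers in any state; 0 means no container exists at all
	      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	    done
	--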
	I1206 10:39:33.200924  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.211672  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.211735  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.237137  528268 cri.go:89] found id: ""
	I1206 10:39:33.237151  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.237159  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.237165  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.237265  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.263318  528268 cri.go:89] found id: ""
	I1206 10:39:33.263332  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.263339  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.263345  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.263403  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.292810  528268 cri.go:89] found id: ""
	I1206 10:39:33.292824  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.292832  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.292837  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.292902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.322280  528268 cri.go:89] found id: ""
	I1206 10:39:33.322294  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.322302  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.322307  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.322371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.347371  528268 cri.go:89] found id: ""
	I1206 10:39:33.347384  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.347391  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.347397  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.347454  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.373452  528268 cri.go:89] found id: ""
	I1206 10:39:33.373465  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.373473  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.373478  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.373536  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.398875  528268 cri.go:89] found id: ""
	I1206 10:39:33.398895  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.398902  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.398910  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.398921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.465783  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.465803  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.480960  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.480977  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.548139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:33.548148  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:33.548158  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:33.617390  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.617412  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.152703  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.162988  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.163052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.188586  528268 cri.go:89] found id: ""
	I1206 10:39:36.188599  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.188607  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.188611  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.188670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.213361  528268 cri.go:89] found id: ""
	I1206 10:39:36.213374  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.213383  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.213388  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.213445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.239271  528268 cri.go:89] found id: ""
	I1206 10:39:36.239285  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.239292  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.239297  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.239357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.265679  528268 cri.go:89] found id: ""
	I1206 10:39:36.265695  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.265702  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.265707  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.265766  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.295654  528268 cri.go:89] found id: ""
	I1206 10:39:36.295668  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.295675  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.295681  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.295739  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.323853  528268 cri.go:89] found id: ""
	I1206 10:39:36.323874  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.323881  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.323887  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.323950  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.348624  528268 cri.go:89] found id: ""
	I1206 10:39:36.348639  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.348646  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.348654  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.348665  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.363245  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.363261  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.427550  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:36.427562  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:36.427573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:36.495925  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495943  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.524935  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.524952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.092735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.102812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.102870  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.129292  528268 cri.go:89] found id: ""
	I1206 10:39:39.129306  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.129313  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.129318  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.129374  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.158470  528268 cri.go:89] found id: ""
	I1206 10:39:39.158484  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.158491  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.158496  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.158555  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.184281  528268 cri.go:89] found id: ""
	I1206 10:39:39.184295  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.184303  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.184308  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.184371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.213800  528268 cri.go:89] found id: ""
	I1206 10:39:39.213813  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.213820  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.213825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.213879  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.239313  528268 cri.go:89] found id: ""
	I1206 10:39:39.239327  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.239334  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.239339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.239399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.266416  528268 cri.go:89] found id: ""
	I1206 10:39:39.266429  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.266436  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.266442  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.266497  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.291512  528268 cri.go:89] found id: ""
	I1206 10:39:39.291526  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.291533  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.291541  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.291552  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.357396  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.357414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.372532  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.372549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.435924  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:39.435935  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:39.435946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:39.504162  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.504182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.034738  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.045722  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.045786  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.075972  528268 cri.go:89] found id: ""
	I1206 10:39:42.075988  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.075998  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.076004  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.076071  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.111989  528268 cri.go:89] found id: ""
	I1206 10:39:42.112018  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.112042  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.112048  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.112124  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.147538  528268 cri.go:89] found id: ""
	I1206 10:39:42.147562  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.147571  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.147577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.147654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.177982  528268 cri.go:89] found id: ""
	I1206 10:39:42.177999  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.178009  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.178016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.178090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.209844  528268 cri.go:89] found id: ""
	I1206 10:39:42.209860  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.209868  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.209874  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.209966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.266057  528268 cri.go:89] found id: ""
	I1206 10:39:42.266071  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.266079  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.266085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.266153  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.298140  528268 cri.go:89] found id: ""
	I1206 10:39:42.298154  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.298162  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.298184  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.298197  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.330034  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.330051  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.396938  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.396958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.412056  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.412077  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.481304  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:42.481314  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:42.481326  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
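	--
	Each describe-nodes attempt fails the same way: nothing is serving on the apiserver port (8441 for this profile), so every kubectl call exits with connection refused. A quick probe to confirm the port state directly (a sketch; the port number is taken from the errors above):
	
	    sudo ss -tlnp | grep ':8441' || echo "nothing listening on 8441"
	    curl -ksS --connect-timeout 5 https://localhost:8441/healthz || true
	--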
	I1206 10:39:45.054765  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.080943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.081023  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.141872  528268 cri.go:89] found id: ""
	I1206 10:39:45.141889  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.141898  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.141904  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.141970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.187818  528268 cri.go:89] found id: ""
	I1206 10:39:45.187838  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.187846  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.187854  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.187928  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.231785  528268 cri.go:89] found id: ""
	I1206 10:39:45.231815  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.231846  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.231853  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.232001  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.271976  528268 cri.go:89] found id: ""
	I1206 10:39:45.272000  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.272007  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.272020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.272144  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.309755  528268 cri.go:89] found id: ""
	I1206 10:39:45.309770  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.309778  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.309784  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.309859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.337077  528268 cri.go:89] found id: ""
	I1206 10:39:45.337091  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.337098  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.337104  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.337161  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.363255  528268 cri.go:89] found id: ""
	I1206 10:39:45.363269  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.363277  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.363285  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.363295  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.430326  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.430345  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.445222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.445239  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.514305  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:45.514315  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:45.514351  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.586673  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.586702  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.117880  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.128191  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.128261  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.153898  528268 cri.go:89] found id: ""
	I1206 10:39:48.153912  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.153919  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.153924  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.153986  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.179947  528268 cri.go:89] found id: ""
	I1206 10:39:48.179960  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.179968  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.179973  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.180032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.206970  528268 cri.go:89] found id: ""
	I1206 10:39:48.206984  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.206992  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.206997  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.207056  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.232490  528268 cri.go:89] found id: ""
	I1206 10:39:48.232504  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.232511  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.232516  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.232574  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.261888  528268 cri.go:89] found id: ""
	I1206 10:39:48.261902  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.261909  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.261915  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.261970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.287239  528268 cri.go:89] found id: ""
	I1206 10:39:48.287259  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.287266  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.287271  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.287327  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.312701  528268 cri.go:89] found id: ""
	I1206 10:39:48.312716  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.312723  528268 logs.go:284] No container was found matching "kindnet"
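Each "listing CRI containers" probe above runs 'crictl ps -a --quiet --name=<component>', which prints container IDs only; empty output is what yields the "0 containers" and 'No container was found' lines. The same scan written as a loop, using only the crictl invocation shown in the log:

    # Empty crictl output means the component's container was never created.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching $c"
    done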
	I1206 10:39:48.312730  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.312741  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.379854  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.379873  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:48.395027  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.395043  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.467966  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
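Every kubectl call in this phase fails identically: nothing is listening on localhost:8441, so the client cannot even fetch the API group list. A quick reachability check that separates "apiserver down" from "apiserver unhealthy", assuming the same port as above:

    # "connection refused" here means no process is bound to the port at all.
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"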
	I1206 10:39:48.467977  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:48.467999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:48.537326  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.537347  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:51.077353  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.088357  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.088422  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.113964  528268 cri.go:89] found id: ""
	I1206 10:39:51.113978  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.113986  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.113991  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.114048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.141966  528268 cri.go:89] found id: ""
	I1206 10:39:51.141981  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.141989  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.141994  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.142065  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.170585  528268 cri.go:89] found id: ""
	I1206 10:39:51.170599  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.170607  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.170612  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.170670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.196958  528268 cri.go:89] found id: ""
	I1206 10:39:51.196972  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.196980  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.196985  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.197045  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.222240  528268 cri.go:89] found id: ""
	I1206 10:39:51.222255  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.222262  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.222267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.222328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.248023  528268 cri.go:89] found id: ""
	I1206 10:39:51.248038  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.248045  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.248051  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.248110  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.276094  528268 cri.go:89] found id: ""
	I1206 10:39:51.276108  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.276115  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.276122  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.276132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.342420  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.342443  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.357018  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.357034  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.423986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.423996  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:51.424007  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:51.493620  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.493640  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:54.023829  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:54.034889  528268 kubeadm.go:602] duration metric: took 4m2.326619845s to restartPrimaryControlPlane
	W1206 10:39:54.034955  528268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
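After 4m02s of polling the old control plane, minikube gives up on restarting it and falls back to wiping kubeadm state and re-initializing (the trailing '<no value>' in the warning looks like an unfilled template placeholder in minikube's message, not meaningful output). The manual equivalent of the fallback it runs next, with paths as shown in the log:

    # Tear down prior kubeadm state before re-running 'kubeadm init'.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force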
	I1206 10:39:54.035078  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:39:54.453084  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:39:54.466906  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:39:54.474624  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:39:54.474678  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:39:54.482552  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:39:54.482562  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:39:54.482612  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:39:54.490238  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:39:54.490301  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:39:54.497760  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:39:54.505776  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:39:54.505840  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:39:54.513397  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.521456  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:39:54.521517  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.529274  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:39:54.537105  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:39:54.537161  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
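The block above is the stale-kubeconfig sweep: for each control-plane kubeconfig, grep for the expected endpoint and remove the file when it is missing or points elsewhere, so kubeadm can regenerate it. The same loop compactly, endpoint and paths taken from the log:

    # Drop any kubeconfig that does not reference the expected control-plane endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done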
	I1206 10:39:54.544719  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:39:54.584997  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:39:54.585045  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:39:54.652750  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:39:54.652815  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:39:54.652850  528268 kubeadm.go:319] OS: Linux
	I1206 10:39:54.652893  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:39:54.652940  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:39:54.652986  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:39:54.653033  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:39:54.653079  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:39:54.653126  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:39:54.653171  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:39:54.653217  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:39:54.653262  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:39:54.728791  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:39:54.728901  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:39:54.729018  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:39:54.737647  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:39:54.741159  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:39:54.741265  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:39:54.741337  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:39:54.741433  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:39:54.741505  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:39:54.741585  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:39:54.741651  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:39:54.741743  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:39:54.741813  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:39:54.741895  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:39:54.741991  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:39:54.742045  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:39:54.742113  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:39:55.375743  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:39:55.444664  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:39:55.561708  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:39:55.802678  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:39:55.992428  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:39:55.993134  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:39:55.995941  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:39:55.999335  528268 out.go:252]   - Booting up control plane ...
	I1206 10:39:55.999434  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:39:55.999507  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:39:55.999569  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:39:56.016567  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:39:56.016688  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:39:56.025029  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:39:56.025345  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:39:56.025411  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:39:56.167783  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:39:56.167896  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:43:56.165890  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000163749s
	I1206 10:43:56.165916  528268 kubeadm.go:319] 
	I1206 10:43:56.165973  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:43:56.166007  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:43:56.166124  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:43:56.166130  528268 kubeadm.go:319] 
	I1206 10:43:56.166237  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:43:56.166298  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:43:56.166345  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:43:56.166349  528268 kubeadm.go:319] 
	I1206 10:43:56.171451  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:43:56.171899  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:43:56.172014  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:43:56.172288  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:43:56.172293  528268 kubeadm.go:319] 
	I1206 10:43:56.172374  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
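The init failure reduces to a single probe: kubeadm polls the kubelet's local healthz endpoint for 4m0s and never gets an answer. The triage the log itself suggests, runnable on the node, plus the exact health probe kubeadm performs:

    # What kubeadm is waiting on:
    curl -sSL http://127.0.0.1:10248/healthz
    # Why the kubelet never comes up:
    systemctl status kubelet
    journalctl -xeu kubelet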
	W1206 10:43:56.172501  528268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000163749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
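Of the preflight warnings above, the cgroups one is a plausible root cause on this host: the 5.15 AWS kernel is running cgroups v1, and per the warning kubelet v1.35+ refuses it unless 'FailCgroupV1' is set to 'false' in the kubelet configuration, which would be consistent with the healthz endpoint never answering. A quick check of the cgroup mode (the remedy comment below restates the warning; the exact field spelling in config.yaml is an assumption):

    # cgroup2fs => cgroups v2; tmpfs => legacy cgroups v1.
    stat -fc %T /sys/fs/cgroup
    # Per the warning: kubelet >= v1.35 on cgroups v1 needs FailCgroupV1=false
    # in its KubeletConfiguration (e.g. /var/lib/kubelet/config.yaml).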
	
	I1206 10:43:56.172597  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:43:56.619462  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:43:56.633229  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:43:56.633287  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:43:56.641609  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:43:56.641619  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:43:56.641669  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:43:56.649494  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:43:56.649548  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:43:56.657009  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:43:56.665153  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:43:56.665204  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:43:56.672965  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.681003  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:43:56.681063  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.688721  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:43:56.696901  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:43:56.696963  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:43:56.704620  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:43:56.745749  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:43:56.745826  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:43:56.814552  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:43:56.814625  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:43:56.814668  528268 kubeadm.go:319] OS: Linux
	I1206 10:43:56.814710  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:43:56.814764  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:43:56.814817  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:43:56.814861  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:43:56.814913  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:43:56.814977  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:43:56.815030  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:43:56.815078  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:43:56.815150  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:43:56.882919  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:43:56.883028  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:43:56.883177  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:43:56.891776  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:43:56.897133  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:43:56.897243  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:43:56.897331  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:43:56.897418  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:43:56.897483  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:43:56.897556  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:43:56.897613  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:43:56.897679  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:43:56.897743  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:43:56.897822  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:43:56.897898  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:43:56.897938  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:43:56.897997  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:43:57.103756  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:43:57.598666  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:43:58.161834  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:43:58.402161  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:43:58.630471  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:43:58.631113  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:43:58.634023  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:43:58.637198  528268 out.go:252]   - Booting up control plane ...
	I1206 10:43:58.637294  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:43:58.637640  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:43:58.639086  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:43:58.654264  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:43:58.654366  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:43:58.662722  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:43:58.663439  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:43:58.663774  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:43:58.799365  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:43:58.799473  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:47:58.799403  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000249913s
	I1206 10:47:58.799433  528268 kubeadm.go:319] 
	I1206 10:47:58.799491  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:47:58.799521  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:47:58.799619  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:47:58.799623  528268 kubeadm.go:319] 
	I1206 10:47:58.799720  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:47:58.799749  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:47:58.799777  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:47:58.799780  528268 kubeadm.go:319] 
	I1206 10:47:58.803822  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:47:58.804249  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:47:58.804357  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:47:58.804590  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:47:58.804595  528268 kubeadm.go:319] 
	I1206 10:47:58.804663  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:47:58.804715  528268 kubeadm.go:403] duration metric: took 12m7.139257328s to StartCluster
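The 12m7s StartCluster figure is consistent with the three waits visible above: about 4m02s polling the old control plane before the reset, then two 'kubeadm init' attempts that each exhaust the 4m0s kubelet-check budget, plus a few seconds of reset and config cleanup in between:

    4m02s + 4m00s + 4m00s = 12m02s, + ~5s of reset/cleanup ≈ 12m07s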
	I1206 10:47:58.804746  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:47:58.804808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:47:58.833842  528268 cri.go:89] found id: ""
	I1206 10:47:58.833855  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.833863  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:47:58.833869  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:47:58.833925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:47:58.859642  528268 cri.go:89] found id: ""
	I1206 10:47:58.859656  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.859663  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:47:58.859668  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:47:58.859731  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:47:58.888835  528268 cri.go:89] found id: ""
	I1206 10:47:58.888850  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.888857  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:47:58.888863  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:47:58.888920  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:47:58.913692  528268 cri.go:89] found id: ""
	I1206 10:47:58.913706  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.913713  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:47:58.913718  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:47:58.913775  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:47:58.941639  528268 cri.go:89] found id: ""
	I1206 10:47:58.941653  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.941660  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:47:58.941671  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:47:58.941728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:47:58.968219  528268 cri.go:89] found id: ""
	I1206 10:47:58.968240  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.968249  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:47:58.968254  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:47:58.968312  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:47:58.993376  528268 cri.go:89] found id: ""
	I1206 10:47:58.993390  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.993397  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:47:58.993405  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:47:58.993415  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:47:59.059491  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:47:59.059510  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:47:59.075692  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:47:59.075708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:47:59.140902  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:47:59.140911  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:47:59.140922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:47:59.218521  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:47:59.218539  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:47:59.255468  528268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:47:59.255514  528268 out.go:285] * 
	W1206 10:47:59.255766  528268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:47:59.255841  528268 out.go:285] * 
	W1206 10:47:59.258456  528268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:47:59.265427  528268 out.go:203] 
	W1206 10:47:59.268413  528268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:47:59.268473  528268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:47:59.268491  528268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:47:59.271584  528268 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040211726Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040248033Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040298977Z" level=info msg="Create NRI interface"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040397822Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.04040656Z" level=info msg="runtime interface created"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040418097Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040424414Z" level=info msg="runtime interface starting up..."
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040430519Z" level=info msg="starting plugins..."
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040443565Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040509278Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:35:50 functional-123579 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.732761675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c00b0212-e336-4d22-92e1-7d2bc5879a6e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.733702159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9f684ee3-1cff-44ee-b48c-175c742cbd8a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.734357315Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b1ddac76-5aa4-4140-b7f7-c9eed400c171 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.734837772Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7e125323-ff3c-4e31-b0b9-3d9689de3e58 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.735631552Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=67dc2959-1f35-4122-97f6-07949ee5c60d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.7361477Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=78397495-3170-4295-8073-cc8bd3750cff name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.736754759Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=b77669c2-3fed-4601-ace3-1a76e50882f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:48:00.811261   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:00.812341   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:00.813219   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:00.814095   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:00.815758   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:48:00 up  3:30,  0 user,  load average: 0.42, 0.21, 0.46
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:47:58 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:47:59 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2128.
	Dec 06 10:47:59 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:47:59 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:47:59 functional-123579 kubelet[21105]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:47:59 functional-123579 kubelet[21105]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:47:59 functional-123579 kubelet[21105]: E1206 10:47:59.256763   21105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:47:59 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:47:59 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:47:59 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2129.
	Dec 06 10:47:59 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:47:59 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:47:59 functional-123579 kubelet[21136]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:47:59 functional-123579 kubelet[21136]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:00 functional-123579 kubelet[21136]: E1206 10:48:00.006061   21136 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:00 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:00 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:00 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2130.
	Dec 06 10:48:00 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:00 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:00 functional-123579 kubelet[21205]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:00 functional-123579 kubelet[21205]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:00 functional-123579 kubelet[21205]: E1206 10:48:00.740836   21205 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:00 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:00 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
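
The kubelet journal above pins down the root cause: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase never sees a healthy kubelet and systemd's restart counter climbs past 2100. A minimal sketch of the two remediations the output itself names, assuming the host must stay on cgroup v1; neither was exercised in this run:

	# Suggestion printed by minikube above (unverified against this failure):
	minikube start -p functional-123579 --extra-config=kubelet.cgroup-driver=systemd

	# Alternative named in the kubeadm SystemVerification warning: re-enable
	# cgroup v1 for kubelet v1.35+ via the KubeletConfiguration field the
	# warning calls 'FailCgroupV1', i.e. in the kubelet config file:
	#   failCgroupV1: false
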
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (366.23916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.84s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-123579 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-123579 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.364346ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-123579 get po -l tier=control-plane -n kube-system -o=json": exit status 1
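Both paths to the apiserver are refused here: kubectl inside the node hits localhost:8441 (the describe-nodes stderr earlier) and the test client hits 192.168.49.2:8441, consistent with no kube-apiserver static pod ever starting. A hypothetical spot-check from the host, assuming the 8441/tcp -> 33186 mapping shown in the docker inspect output below; even a 401/403 response would prove the apiserver is up, while "connection refused" means it never started:

	# Sketch only, not part of the test run; -k skips TLS verification.
	curl -k https://127.0.0.1:33186/healthz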
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
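The Ports map in this inspect output is where the host-side endpoints come from; the "Last Start" log below reads the SSH port with a Go template over the same structure. As a sketch (not a command the harness ran), the identical template applied to the apiserver port:

	docker inspect functional-123579 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'   # prints 33186 here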
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (330.005868ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-137526 image ls --format yaml --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ functional-137526 ssh pgrep buildkitd                                                                                                             │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ image   │ functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr                                            │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format json --alsologtostderr                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls --format table --alsologtostderr                                                                                       │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ image   │ functional-137526 image ls                                                                                                                        │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ delete  │ -p functional-137526                                                                                                                              │ functional-137526 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ start   │ -p functional-123579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ start   │ -p functional-123579 --alsologtostderr -v=8                                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add registry.k8s.io/pause:latest                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache add minikube-local-cache-test:functional-123579                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ functional-123579 cache delete minikube-local-cache-test:functional-123579                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl images                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ cache   │ functional-123579 cache reload                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ kubectl │ functional-123579 kubectl -- --context functional-123579 get pods                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ start   │ -p functional-123579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:35:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:35:46.955658  528268 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:46.955828  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.955833  528268 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:46.955837  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.956177  528268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:35:46.956655  528268 out.go:368] Setting JSON to false
	I1206 10:35:46.957664  528268 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11898,"bootTime":1765005449,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:35:46.957734  528268 start.go:143] virtualization:  
	I1206 10:35:46.961283  528268 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:35:46.964510  528268 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:35:46.964613  528268 notify.go:221] Checking for updates...
	I1206 10:35:46.968278  528268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:46.971356  528268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:35:46.974199  528268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:35:46.977104  528268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:35:46.980765  528268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:46.984213  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:46.984322  528268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:47.012645  528268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:35:47.012749  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.074577  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.064697556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.074671  528268 docker.go:319] overlay module found
	I1206 10:35:47.077640  528268 out.go:179] * Using the docker driver based on existing profile
	I1206 10:35:47.080521  528268 start.go:309] selected driver: docker
	I1206 10:35:47.080533  528268 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.080637  528268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:47.080758  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.138440  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.128848609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.138821  528268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:35:47.138844  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:47.138899  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:47.138936  528268 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.144166  528268 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:35:47.147068  528268 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:35:47.149949  528268 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:35:47.152780  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:47.152816  528268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:35:47.152824  528268 cache.go:65] Caching tarball of preloaded images
	I1206 10:35:47.152870  528268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:35:47.152921  528268 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:35:47.152931  528268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:35:47.153043  528268 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:35:47.172511  528268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:35:47.172523  528268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:35:47.172545  528268 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:35:47.172580  528268 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:35:47.172652  528268 start.go:364] duration metric: took 54.497µs to acquireMachinesLock for "functional-123579"
	I1206 10:35:47.172672  528268 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:35:47.172676  528268 fix.go:54] fixHost starting: 
	I1206 10:35:47.172937  528268 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:35:47.189604  528268 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:35:47.189624  528268 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:35:47.192615  528268 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:35:47.192637  528268 machine.go:94] provisionDockerMachine start ...
	I1206 10:35:47.192731  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.209670  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.209990  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.209996  528268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:35:47.362840  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.362854  528268 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:35:47.362918  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.381544  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.381860  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.381868  528268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:35:47.544930  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.545031  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.563487  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.563810  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.563823  528268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:35:47.717170  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:35:47.717187  528268 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:35:47.717204  528268 ubuntu.go:190] setting up certificates
	I1206 10:35:47.717211  528268 provision.go:84] configureAuth start
	I1206 10:35:47.717282  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:47.741856  528268 provision.go:143] copyHostCerts
	I1206 10:35:47.741924  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:35:47.741936  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:35:47.742009  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:35:47.742105  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:35:47.742109  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:35:47.742132  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:35:47.742180  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:35:47.742184  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:35:47.742206  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:35:47.742252  528268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
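The san=[...] list in the line above enumerates every name the machine's server certificate must answer to. A minimal Go sketch of issuing such a certificate with those SANs follows; it is self-signed for brevity, whereas minikube signs it with the CA behind ca-key.pem, and only the names and durations are taken from the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-123579"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged: san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
		DNSNames:    []string{"functional-123579", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed: the template doubles as its own parent. The real flow
	// passes the cluster CA certificate and key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}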
	I1206 10:35:47.924439  528268 provision.go:177] copyRemoteCerts
	I1206 10:35:47.924500  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:35:47.924538  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.942367  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.047397  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:35:48.065928  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:35:48.085149  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:35:48.103937  528268 provision.go:87] duration metric: took 386.701009ms to configureAuth
	I1206 10:35:48.103956  528268 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:35:48.104161  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:48.104265  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.122386  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:48.122699  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:48.122711  528268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:35:48.484149  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:35:48.484161  528268 machine.go:97] duration metric: took 1.291517603s to provisionDockerMachine
	I1206 10:35:48.484171  528268 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:35:48.484183  528268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:35:48.484243  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:35:48.484311  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.507680  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.615171  528268 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:35:48.618416  528268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:35:48.618434  528268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:35:48.618444  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:35:48.618496  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:35:48.618569  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:35:48.618650  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:35:48.618693  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:35:48.626464  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:48.643882  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:35:48.662582  528268 start.go:296] duration metric: took 178.395271ms for postStartSetup
	I1206 10:35:48.662675  528268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:35:48.662713  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.680751  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.784322  528268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:35:48.789238  528268 fix.go:56] duration metric: took 1.616554387s for fixHost
	I1206 10:35:48.789253  528268 start.go:83] releasing machines lock for "functional-123579", held for 1.616594099s
	I1206 10:35:48.789324  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:48.807477  528268 ssh_runner.go:195] Run: cat /version.json
	I1206 10:35:48.807520  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.807562  528268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:35:48.807618  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.828942  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.845083  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:49.020126  528268 ssh_runner.go:195] Run: systemctl --version
	I1206 10:35:49.026608  528268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:35:49.065500  528268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:35:49.069961  528268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:35:49.070024  528268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:35:49.077978  528268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:35:49.077992  528268 start.go:496] detecting cgroup driver to use...
	I1206 10:35:49.078033  528268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:35:49.078078  528268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:35:49.093402  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:35:49.106707  528268 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:35:49.106771  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:35:49.122603  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:35:49.135424  528268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:35:49.251969  528268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:35:49.384025  528268 docker.go:234] disabling docker service ...
	I1206 10:35:49.384082  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:35:49.398904  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:35:49.412283  528268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:35:49.535452  528268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:35:49.651851  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:35:49.665735  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:35:49.680503  528268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:35:49.680561  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.689947  528268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:35:49.690006  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.699358  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.708725  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.718744  528268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:35:49.727534  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.737013  528268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.745582  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.754308  528268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:35:49.762144  528268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:35:49.769875  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:49.884338  528268 ssh_runner.go:195] Run: sudo systemctl restart crio
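The sed one-liners above flip pause_image, cgroup_manager, and the sysctl list inside /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart. A hedged Go sketch of the same line-oriented rewrite (path and target values from the log; the program itself is hypothetical, not minikube code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitutions the sed commands above perform, one regexp per key.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path, "- follow with: systemctl daemon-reload && systemctl restart crio")
}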
	I1206 10:35:50.052236  528268 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:35:50.052348  528268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:35:50.057582  528268 start.go:564] Will wait 60s for crictl version
	I1206 10:35:50.057651  528268 ssh_runner.go:195] Run: which crictl
	I1206 10:35:50.062638  528268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:35:50.100652  528268 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:35:50.100743  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.139579  528268 ssh_runner.go:195] Run: crio --version
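The "Will wait 60s for socket path" step above is a bounded poll against /var/run/crio/crio.sock after the restart. A minimal sketch of that pattern, using only the stdlib (the 500ms cadence is an assumption; only the path and 60s budget come from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes — the same
// bounded wait the log line above describes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}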
	I1206 10:35:50.174800  528268 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:35:50.177732  528268 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:35:50.194850  528268 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:35:50.201950  528268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:35:50.204938  528268 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:35:50.205078  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:50.205145  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.240680  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.240692  528268 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:35:50.240750  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.267939  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.267955  528268 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:35:50.267962  528268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:35:50.268053  528268 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:35:50.268129  528268 ssh_runner.go:195] Run: crio config
	I1206 10:35:50.326220  528268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:35:50.326240  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:50.326248  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:50.326256  528268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:35:50.326280  528268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:35:50.326407  528268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:35:50.326477  528268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:35:50.334319  528268 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:35:50.334378  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:35:50.341826  528268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:35:50.354245  528268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:35:50.367015  528268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
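The YAML above is rendered from Go templates fed by the kubeadm options struct logged at 10:35:50.326280, then shipped to /var/tmp/minikube/kubeadm.yaml.new (the 2071-byte scp above). A toy sketch of that render step — the template fragment here is trimmed and hypothetical, not minikube's real one, which lives in the minikube source tree:

package main

import (
	"os"
	"text/template"
)

// A trimmed, hypothetical fragment of the InitConfiguration template;
// field names mirror the kubeadm options struct from the log.
const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.49.2", 8441, "unix:///var/run/crio/crio.sock", "functional-123579"}
	t := template.Must(template.New("kubeadm").Parse(frag))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}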
	I1206 10:35:50.379350  528268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:35:50.382958  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:50.504018  528268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:35:50.930865  528268 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:35:50.930875  528268 certs.go:195] generating shared ca certs ...
	I1206 10:35:50.930889  528268 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:35:50.931046  528268 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:35:50.931093  528268 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:35:50.931099  528268 certs.go:257] generating profile certs ...
	I1206 10:35:50.931220  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:35:50.931274  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:35:50.931318  528268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:35:50.931430  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:35:50.931460  528268 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:35:50.931466  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:35:50.931493  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:35:50.931515  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:35:50.931536  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:35:50.931577  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:50.932148  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:35:50.953643  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:35:50.975543  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:35:50.998708  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:35:51.019841  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:35:51.038179  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:35:51.055740  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:35:51.075573  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:35:51.094756  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:35:51.113922  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:35:51.132368  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:35:51.150650  528268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:35:51.163984  528268 ssh_runner.go:195] Run: openssl version
	I1206 10:35:51.171418  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.179298  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:35:51.187013  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190756  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190814  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.231889  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:35:51.239348  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.246609  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:35:51.254276  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258574  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258631  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.301011  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:35:51.308790  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.316400  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:35:51.324195  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328353  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328409  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.371753  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
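Each openssl x509 -hash -noout call above prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink checked right after it (3ec20f2e.0, b5213941.0, 51391683.0). A hedged sketch of that install step, shelling out to openssl exactly as the log does (needs root and an openssl binary; the helper is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Link a CA file into /etc/ssl/certs under its OpenSSL subject hash,
	// mirroring the ln -fs / test -L pair in the log above.
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replicate ln -f semantics: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}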
	I1206 10:35:51.379339  528268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:35:51.383319  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:35:51.424469  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:35:51.465529  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:35:51.511345  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:35:51.565170  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:35:51.614532  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:35:51.665468  528268 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:51.665553  528268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:35:51.665612  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.699589  528268 cri.go:89] found id: ""
	I1206 10:35:51.699652  528268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:35:51.708250  528268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:35:51.708260  528268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:35:51.708318  528268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:35:51.716593  528268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.717135  528268 kubeconfig.go:125] found "functional-123579" server: "https://192.168.49.2:8441"
	I1206 10:35:51.718506  528268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:35:51.728290  528268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:21:13.758601441 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:35:50.371679399 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
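The drift check above boils down to "does kubeadm.yaml.new differ from kubeadm.yaml": a cheap comparison decides whether to reconfigure, and the unified diff is only for display. A minimal sketch of that check (paths from the log; the program is a stand-in, not minikube's implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	oldPath := "/var/tmp/minikube/kubeadm.yaml"
	newPath := "/var/tmp/minikube/kubeadm.yaml.new"
	a, errA := os.ReadFile(oldPath)
	b, errB := os.ReadFile(newPath)
	if errA != nil || errB != nil || !bytes.Equal(a, b) {
		fmt.Println("kubeadm config drift detected, will reconfigure:")
		// diff exits 1 when files differ, so ignore the error and just show output.
		out, _ := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		os.Stdout.Write(out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}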
	I1206 10:35:51.728307  528268 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:35:51.728319  528268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:35:51.728381  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.763757  528268 cri.go:89] found id: ""
	I1206 10:35:51.763820  528268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:35:51.777420  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:35:51.785097  528268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:25 /etc/kubernetes/scheduler.conf
	
	I1206 10:35:51.785162  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:35:51.792642  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:35:51.800316  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.800387  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:35:51.808313  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.815662  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.815715  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.823153  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:35:51.831093  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.831167  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:35:51.838577  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:35:51.846346  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:51.894809  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:52.979571  528268 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084737023s)
	I1206 10:35:52.979630  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.188528  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.255794  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.309672  528268 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:35:53.309740  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:53.810758  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:54.309899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:54.810832  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:55.309958  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:55.809819  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:56.310103  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:56.809902  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:57.309923  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:57.809975  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:58.310731  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:58.809924  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:59.310585  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:59.810731  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:00.309923  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:00.810538  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:01.310473  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:01.810374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:02.310412  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:02.809925  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:03.309918  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:03.810667  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:04.310497  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:04.810559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:05.310616  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:05.810787  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:06.310760  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:06.810542  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:07.310481  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:07.810515  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:08.310271  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:08.810300  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:09.309935  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:09.809899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:10.310756  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:10.809928  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:11.309919  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:11.809916  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:12.310322  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:12.809962  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:13.309904  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:13.809901  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:14.309825  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:14.809939  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:15.309858  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:15.810769  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:16.310915  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:16.809905  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:17.310298  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:17.809935  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:18.310774  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:18.810876  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:19.310588  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:19.810539  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:20.309961  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:20.810313  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:21.310718  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:21.810176  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:22.310761  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:22.809819  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:23.310605  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:23.810607  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:24.310709  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:24.810672  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:25.309883  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:25.810296  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:26.309901  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:26.810157  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:27.310838  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:27.810698  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:28.309956  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:28.809934  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:29.310713  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:29.810598  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:30.310564  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:30.809937  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:31.309915  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:31.810618  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:32.310478  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:32.809942  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:33.310175  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:33.810817  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:34.310221  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:34.810764  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:35.309907  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:35.810700  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:36.310275  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:36.810581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:37.310397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:37.809951  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:38.310518  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:38.810174  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:39.310213  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:39.810271  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:40.309911  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:40.810748  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:41.310557  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:41.810632  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:42.309870  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:42.810506  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:43.309942  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:43.810676  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:44.310713  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:44.810703  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:45.310440  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:45.810823  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:46.309845  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:46.810726  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:47.310769  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:47.809917  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:48.310694  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:48.810273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:49.310273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:49.810301  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:50.309899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:50.809907  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:51.309963  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:51.810551  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:52.310532  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:52.810599  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
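The half-second cadence above (10:35:53 through 10:36:52, after which the run gives up and falls into log gathering below) is a retry loop around pgrep -xnf kube-apiserver.*minikube.*. A sketch of that wait under the same pattern and budget (the loop itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Run pgrep every 500ms until a matching kube-apiserver process
	// appears or the overall budget runs out.
	pattern := "kube-apiserver.*minikube.*"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "gave up waiting for apiserver; collecting diagnostics instead")
	os.Exit(1)
}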
	I1206 10:36:53.310630  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:53.310706  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:53.342266  528268 cri.go:89] found id: ""
	I1206 10:36:53.342280  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.342287  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:53.342292  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:53.342356  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:53.368755  528268 cri.go:89] found id: ""
	I1206 10:36:53.368774  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.368781  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:53.368785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:53.368846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:53.393431  528268 cri.go:89] found id: ""
	I1206 10:36:53.393447  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.393454  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:53.393459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:53.393515  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:53.418954  528268 cri.go:89] found id: ""
	I1206 10:36:53.418967  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.418974  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:53.418979  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:53.419036  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:53.444726  528268 cri.go:89] found id: ""
	I1206 10:36:53.444740  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.444747  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:53.444752  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:53.444809  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:53.469041  528268 cri.go:89] found id: ""
	I1206 10:36:53.469054  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.469062  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:53.469067  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:53.469122  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:53.494455  528268 cri.go:89] found id: ""
	I1206 10:36:53.494468  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.494475  528268 logs.go:284] No container was found matching "kindnet"
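Each cri.go/ssh_runner.go pair above queries one control-plane component by name; only when every query returns an empty ID list does the run fall through to raw log gathering. A sketch of the same sweep, with the component list taken from the log and everything else illustrative:

    #!/usr/bin/env bash
    # Query the CRI runtime for each expected control-plane container,
    # mirroring the crictl calls in the log above.
    components=(kube-apiserver etcd coredns kube-scheduler
                kube-proxy kube-controller-manager kindnet)
    for name in "${components[@]}"; do
        ids=$(sudo crictl ps -a --quiet --name="${name}")
        if [[ -z "${ids}" ]]; then
            echo "no container found matching \"${name}\"" >&2
        else
            echo "${name}: ${ids}"
        fi
    done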
	I1206 10:36:53.494483  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:53.494496  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:53.557127  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:53.557137  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:53.557148  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:53.629870  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:53.629900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:53.661451  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:53.661466  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:53.730909  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:53.730927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
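With no containers to inspect, the gather step falls back to host-level sources: the kubelet and CRI-O journals, dmesg, and a container-status command that prefers crictl but falls back to docker when crictl is not on PATH. The same commands, collected into one runnable reference script (taken from the log, lightly quoted):

    #!/usr/bin/env bash
    # Host-level log gathering, as run in each cycle above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Container status: use crictl if present, otherwise try docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a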
	I1206 10:36:56.247245  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:56.257306  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:56.257364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:56.286141  528268 cri.go:89] found id: ""
	I1206 10:36:56.286155  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.286163  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:56.286168  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:56.286228  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:56.313467  528268 cri.go:89] found id: ""
	I1206 10:36:56.313481  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.313488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:56.313499  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:56.313559  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:56.340777  528268 cri.go:89] found id: ""
	I1206 10:36:56.340791  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.340798  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:56.340803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:56.340862  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:56.367085  528268 cri.go:89] found id: ""
	I1206 10:36:56.367099  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.367106  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:56.367111  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:56.367188  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:56.392392  528268 cri.go:89] found id: ""
	I1206 10:36:56.392407  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.392414  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:56.392420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:56.392482  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:56.417786  528268 cri.go:89] found id: ""
	I1206 10:36:56.417799  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.417807  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:56.417812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:56.417871  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:56.443872  528268 cri.go:89] found id: ""
	I1206 10:36:56.443886  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.443893  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:56.443901  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:56.443911  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:56.509704  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:56.509723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:56.524726  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:56.524742  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:56.590779  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:56.590789  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:56.590799  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:56.657863  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:56.657883  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.188879  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:59.199665  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:59.199726  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:59.232126  528268 cri.go:89] found id: ""
	I1206 10:36:59.232140  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.232148  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:59.232153  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:59.232212  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:59.257550  528268 cri.go:89] found id: ""
	I1206 10:36:59.257564  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.257571  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:59.257576  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:59.257633  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:59.282608  528268 cri.go:89] found id: ""
	I1206 10:36:59.282623  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.282630  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:59.282636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:59.282698  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:59.312791  528268 cri.go:89] found id: ""
	I1206 10:36:59.312806  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.312813  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:59.312819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:59.312881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:59.339361  528268 cri.go:89] found id: ""
	I1206 10:36:59.339376  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.339383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:59.339388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:59.339447  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:59.366255  528268 cri.go:89] found id: ""
	I1206 10:36:59.366269  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.366276  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:59.366281  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:59.366339  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:59.394131  528268 cri.go:89] found id: ""
	I1206 10:36:59.394145  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.394152  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:59.394172  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:59.394182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:59.462514  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:59.462536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.491731  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:59.491747  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:59.562406  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:59.562426  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:59.577286  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:59.577302  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:59.642145  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
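Every describe-nodes attempt fails identically: kubectl's API discovery cannot reach https://localhost:8441 and gets connection refused, which is consistent with the empty container listings above (no apiserver is running at all). A quick check for that condition; the port comes from the log, the probe itself is an illustrative assumption:

    #!/usr/bin/env bash
    # Probe the apiserver port kubectl is dialing in the log above.
    port=8441
    if curl -sk --max-time 5 "https://localhost:${port}/healthz" >/dev/null; then
        echo "apiserver responding on :${port}"
    else
        echo "connection to :${port} refused or timed out" >&2
        sudo ss -ltnp | grep ":${port}" || echo "nothing listening on :${port}" >&2
    fi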
	I1206 10:37:02.143135  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:02.153343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:02.153402  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:02.182430  528268 cri.go:89] found id: ""
	I1206 10:37:02.182453  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.182460  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:02.182466  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:02.182529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:02.217140  528268 cri.go:89] found id: ""
	I1206 10:37:02.217164  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.217171  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:02.217176  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:02.217241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:02.264761  528268 cri.go:89] found id: ""
	I1206 10:37:02.264775  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.264795  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:02.264800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:02.264857  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:02.295104  528268 cri.go:89] found id: ""
	I1206 10:37:02.295118  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.295161  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:02.295166  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:02.295232  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:02.324690  528268 cri.go:89] found id: ""
	I1206 10:37:02.324704  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.324711  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:02.324716  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:02.324776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:02.354165  528268 cri.go:89] found id: ""
	I1206 10:37:02.354179  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.354187  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:02.354192  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:02.354250  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:02.379657  528268 cri.go:89] found id: ""
	I1206 10:37:02.379671  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.379679  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:02.379686  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:02.379697  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:02.449725  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:02.449746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:02.464766  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:02.464783  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:02.527444  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:02.527457  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:02.527467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:02.595482  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:02.595503  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.126581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:05.136725  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:05.136783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:05.162008  528268 cri.go:89] found id: ""
	I1206 10:37:05.162022  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.162049  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:05.162055  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:05.162123  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:05.190290  528268 cri.go:89] found id: ""
	I1206 10:37:05.190305  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.190313  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:05.190318  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:05.190399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:05.222971  528268 cri.go:89] found id: ""
	I1206 10:37:05.223000  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.223008  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:05.223013  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:05.223083  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:05.249192  528268 cri.go:89] found id: ""
	I1206 10:37:05.249206  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.249213  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:05.249218  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:05.249285  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:05.280084  528268 cri.go:89] found id: ""
	I1206 10:37:05.280097  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.280104  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:05.280110  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:05.280176  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:05.306008  528268 cri.go:89] found id: ""
	I1206 10:37:05.306036  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.306044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:05.306049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:05.306115  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:05.331829  528268 cri.go:89] found id: ""
	I1206 10:37:05.331843  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.331850  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:05.331858  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:05.331868  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:05.394775  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:05.394787  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:05.394798  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:05.463063  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:05.463082  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.496791  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:05.496808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:05.562749  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:05.562768  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.077865  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.088556  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:08.088628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:08.114942  528268 cri.go:89] found id: ""
	I1206 10:37:08.114956  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.114963  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:08.114969  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:08.115027  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:08.141141  528268 cri.go:89] found id: ""
	I1206 10:37:08.141155  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.141162  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:08.141167  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:08.141235  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:08.166303  528268 cri.go:89] found id: ""
	I1206 10:37:08.166318  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.166325  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:08.166334  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:08.166394  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:08.199234  528268 cri.go:89] found id: ""
	I1206 10:37:08.199248  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.199255  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:08.199260  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:08.199326  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:08.231753  528268 cri.go:89] found id: ""
	I1206 10:37:08.231767  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.231774  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:08.231780  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:08.231842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:08.260152  528268 cri.go:89] found id: ""
	I1206 10:37:08.260166  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.260173  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:08.260179  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:08.260241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:08.285346  528268 cri.go:89] found id: ""
	I1206 10:37:08.285360  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.285367  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:08.285378  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:08.285388  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:08.353719  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:08.353740  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:08.385085  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:08.385101  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:08.459734  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:08.459762  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.474846  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:08.474862  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:08.546432  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:11.048129  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.058654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:11.058714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:11.086873  528268 cri.go:89] found id: ""
	I1206 10:37:11.086889  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.086896  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:11.086903  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:11.086965  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:11.113880  528268 cri.go:89] found id: ""
	I1206 10:37:11.113904  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.113912  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:11.113918  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:11.113987  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:11.142338  528268 cri.go:89] found id: ""
	I1206 10:37:11.142361  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.142370  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:11.142375  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:11.142448  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:11.168341  528268 cri.go:89] found id: ""
	I1206 10:37:11.168355  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.168362  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:11.168368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:11.168425  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:11.218236  528268 cri.go:89] found id: ""
	I1206 10:37:11.218277  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.218285  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:11.218290  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:11.218357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:11.257366  528268 cri.go:89] found id: ""
	I1206 10:37:11.257379  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.257386  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:11.257391  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:11.257455  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:11.283202  528268 cri.go:89] found id: ""
	I1206 10:37:11.283224  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.283235  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:11.283251  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:11.283269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:11.349630  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:11.349650  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:11.365578  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:11.365606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:11.431959  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:11.431970  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:11.431981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:11.502903  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:11.502922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:14.032953  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.043177  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:14.043291  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:14.068855  528268 cri.go:89] found id: ""
	I1206 10:37:14.068870  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.068877  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:14.068882  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:14.068946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:14.094277  528268 cri.go:89] found id: ""
	I1206 10:37:14.094290  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.094308  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:14.094315  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:14.094372  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:14.119916  528268 cri.go:89] found id: ""
	I1206 10:37:14.119930  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.119948  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:14.119954  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:14.120029  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:14.144999  528268 cri.go:89] found id: ""
	I1206 10:37:14.145012  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.145020  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:14.145026  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:14.145088  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:14.170372  528268 cri.go:89] found id: ""
	I1206 10:37:14.170386  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.170404  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:14.170409  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:14.170475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:14.220015  528268 cri.go:89] found id: ""
	I1206 10:37:14.220029  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.220036  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:14.220041  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:14.220102  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:14.249187  528268 cri.go:89] found id: ""
	I1206 10:37:14.249201  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.249208  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:14.249216  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:14.249226  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:14.315809  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:14.315830  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:14.331228  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:14.331245  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:14.394665  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:14.394676  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:14.394686  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:14.466599  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:14.466623  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:16.996304  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.008394  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:17.008453  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:17.036500  528268 cri.go:89] found id: ""
	I1206 10:37:17.036513  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.036521  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:17.036526  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:17.036591  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:17.064759  528268 cri.go:89] found id: ""
	I1206 10:37:17.064773  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.064780  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:17.064785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:17.064846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:17.095263  528268 cri.go:89] found id: ""
	I1206 10:37:17.095276  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.095284  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:17.095300  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:17.095364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:17.121651  528268 cri.go:89] found id: ""
	I1206 10:37:17.121665  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.121673  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:17.121678  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:17.121747  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:17.148683  528268 cri.go:89] found id: ""
	I1206 10:37:17.148697  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.148704  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:17.148711  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:17.148773  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:17.180504  528268 cri.go:89] found id: ""
	I1206 10:37:17.180518  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.180535  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:17.180542  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:17.180611  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:17.208816  528268 cri.go:89] found id: ""
	I1206 10:37:17.208830  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.208837  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:17.208844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:17.208854  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:17.277798  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:17.277818  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:17.292728  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:17.292743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:17.366791  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:17.366801  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:17.366812  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:17.434192  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:17.434212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
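
Each pass of this health loop can be reproduced by hand on the node. A minimal sketch, assuming SSH access to the minikube node and that crictl is installed there (both implied by the ssh_runner lines above); the process pattern and the component list are taken directly from the log:

    #!/usr/bin/env bash
    # Reproduce one probe pass: look for a live apiserver process, then ask
    # the CRI runtime for each control-plane container, as the loop above does.
    set -u
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching $name"    # mirrors the W-level lines above
      else
        echo "$name: $ids"
      fi
    done

In this run every probe comes back empty, which is why each pass falls through to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output.
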
	I1206 10:37:19.971273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.981226  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:19.981286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:20.019762  528268 cri.go:89] found id: ""
	I1206 10:37:20.019777  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.019785  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:20.019791  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:20.019866  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:20.047256  528268 cri.go:89] found id: ""
	I1206 10:37:20.047270  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.047278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:20.047283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:20.047345  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:20.075694  528268 cri.go:89] found id: ""
	I1206 10:37:20.075708  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.075716  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:20.075721  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:20.075785  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:20.105896  528268 cri.go:89] found id: ""
	I1206 10:37:20.105910  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.105917  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:20.105922  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:20.105981  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:20.131910  528268 cri.go:89] found id: ""
	I1206 10:37:20.131923  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.131930  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:20.131935  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:20.131997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:20.157115  528268 cri.go:89] found id: ""
	I1206 10:37:20.157129  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.157135  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:20.157140  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:20.157202  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:20.188374  528268 cri.go:89] found id: ""
	I1206 10:37:20.188394  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.188401  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:20.188423  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:20.188434  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:20.267587  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:20.267607  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:20.283222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:20.283238  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:20.348772  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:20.348783  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:20.348796  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:20.415451  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:20.415474  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
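
The cycle repeats on a roughly three-second cadence (10:37:17, :19.9, :22.9, ...), and every empty pass triggers the same gathering step. Condensed, that step runs the following commands, copied verbatim from the log; the only assumption is that the node really has the v1.35.0-beta.0 kubectl binary at the path shown:

    # Per-iteration "Gathering logs for ..." pass, commands as logged:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection refused on :8441
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Only the describe-nodes step is logged as failing; the journalctl and crictl calls return without warnings, so the failure is isolated to the apiserver endpoint rather than to node access.
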
	[... the identical probe-and-gather cycle repeats seven more times between 10:37:22 and 10:37:41 (kubectl PIDs 12082, 12186, 12284, 12391, 12497, 12601, 12706), each pass finding no kube-apiserver/etcd/coredns/kube-scheduler/kube-proxy/kube-controller-manager/kindnet containers and failing "describe nodes" with the same connection-refused errors on localhost:8441 ...]
	I1206 10:37:43.659238  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.669354  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:43.669430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:43.694872  528268 cri.go:89] found id: ""
	I1206 10:37:43.694886  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.694893  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:43.694899  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:43.694956  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:43.720265  528268 cri.go:89] found id: ""
	I1206 10:37:43.720278  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.720286  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:43.720290  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:43.720349  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:43.746213  528268 cri.go:89] found id: ""
	I1206 10:37:43.746226  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.746234  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:43.746239  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:43.746300  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:43.771902  528268 cri.go:89] found id: ""
	I1206 10:37:43.771916  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.771923  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:43.771928  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:43.771984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:43.797840  528268 cri.go:89] found id: ""
	I1206 10:37:43.797854  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.797874  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:43.797879  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:43.797949  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:43.823569  528268 cri.go:89] found id: ""
	I1206 10:37:43.823583  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.823590  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:43.823596  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:43.823654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:43.850154  528268 cri.go:89] found id: ""
	I1206 10:37:43.850169  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.850187  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:43.850196  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:43.850207  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:43.919668  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:43.919690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.954253  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:43.954269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:44.019533  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:44.019556  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:44.034911  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:44.034930  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:44.098130  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
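
	Each "describe nodes" attempt fails identically: kubectl dials the apiserver at localhost:8441 and is refused, which matches the empty crictl listings above — there is no kube-apiserver container to answer. Two quick manual checks on the node would confirm this (hypothetical commands, not taken from this log):

	# Is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8441
	# If the apiserver were up, its health endpoint would answer:
	curl -sk https://localhost:8441/livez
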
	I1206 10:37:46.599796  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.610343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:46.610410  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:46.637289  528268 cri.go:89] found id: ""
	I1206 10:37:46.637304  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.637311  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:46.637317  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:46.637380  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:46.664098  528268 cri.go:89] found id: ""
	I1206 10:37:46.664112  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.664118  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:46.664123  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:46.664183  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:46.693606  528268 cri.go:89] found id: ""
	I1206 10:37:46.693619  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.693638  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:46.693644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:46.693718  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:46.719425  528268 cri.go:89] found id: ""
	I1206 10:37:46.719438  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.719445  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:46.719451  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:46.719511  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:46.748960  528268 cri.go:89] found id: ""
	I1206 10:37:46.748974  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.748982  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:46.748987  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:46.749047  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:46.782749  528268 cri.go:89] found id: ""
	I1206 10:37:46.782763  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.782770  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:46.782776  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:46.782846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:46.807615  528268 cri.go:89] found id: ""
	I1206 10:37:46.807629  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.807636  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:46.807644  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:46.807654  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:46.838618  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:46.838634  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:46.905518  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:46.905537  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:46.920399  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:46.920417  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:46.985957  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:46.978179   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.978741   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980269   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980715   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.982218   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:46.985968  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:46.985981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.555258  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.565209  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:49.565266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:49.593833  528268 cri.go:89] found id: ""
	I1206 10:37:49.593846  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.593853  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:49.593858  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:49.593914  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:49.621098  528268 cri.go:89] found id: ""
	I1206 10:37:49.621111  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.621119  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:49.621124  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:49.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:49.645669  528268 cri.go:89] found id: ""
	I1206 10:37:49.645681  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.645689  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:49.645694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:49.645750  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:49.672058  528268 cri.go:89] found id: ""
	I1206 10:37:49.672072  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.672080  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:49.672085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:49.672140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:49.696988  528268 cri.go:89] found id: ""
	I1206 10:37:49.697002  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.697009  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:49.697015  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:49.697076  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:49.723261  528268 cri.go:89] found id: ""
	I1206 10:37:49.723275  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.723282  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:49.723287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:49.723357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:49.750307  528268 cri.go:89] found id: ""
	I1206 10:37:49.750321  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.750328  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:49.750336  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:49.750346  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:49.765699  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:49.765721  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:49.827929  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:49.819281   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.820177   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.821896   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.822193   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.823677   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:49.827938  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:49.827962  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.899802  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:49.899820  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:49.928018  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:49.928035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
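
	The cadence here is one poll roughly every three seconds: look for a kube-apiserver process, list each control-plane component's containers, then re-gather logs. A rough shell sketch of the same wait loop (an illustration of the pattern visible in this log, not minikube's implementation):

	# Poll until an apiserver process shows up; every pass below came back empty.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$c"
	  done
	  sleep 3
	done
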
	I1206 10:37:52.495744  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:52.505888  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:52.505958  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:52.532610  528268 cri.go:89] found id: ""
	I1206 10:37:52.532623  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.532631  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:52.532636  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:52.532695  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:52.558679  528268 cri.go:89] found id: ""
	I1206 10:37:52.558692  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.558700  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:52.558705  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:52.558762  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:52.585203  528268 cri.go:89] found id: ""
	I1206 10:37:52.585217  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.585225  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:52.585230  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:52.585286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:52.611483  528268 cri.go:89] found id: ""
	I1206 10:37:52.611496  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.611503  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:52.611510  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:52.611568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:52.638054  528268 cri.go:89] found id: ""
	I1206 10:37:52.638067  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.638075  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:52.638080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:52.638137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:52.666746  528268 cri.go:89] found id: ""
	I1206 10:37:52.666760  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.666767  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:52.666773  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:52.666833  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:52.691974  528268 cri.go:89] found id: ""
	I1206 10:37:52.691997  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.692005  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:52.692015  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:52.692025  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:52.761093  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:52.761113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:52.790376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:52.790392  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.858897  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:52.858915  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:52.873906  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:52.873923  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:52.937907  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:52.929773   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.930648   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932194   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932561   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.934055   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.439279  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.450466  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:55.450529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:55.483494  528268 cri.go:89] found id: ""
	I1206 10:37:55.483508  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.483515  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:55.483520  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:55.483576  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:55.515860  528268 cri.go:89] found id: ""
	I1206 10:37:55.515874  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.515881  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:55.515886  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:55.515942  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:55.542224  528268 cri.go:89] found id: ""
	I1206 10:37:55.542239  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.542248  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:55.542253  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:55.542311  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:55.567547  528268 cri.go:89] found id: ""
	I1206 10:37:55.567561  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.567568  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:55.567574  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:55.567630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:55.594478  528268 cri.go:89] found id: ""
	I1206 10:37:55.594491  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.594499  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:55.594505  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:55.594568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:55.620118  528268 cri.go:89] found id: ""
	I1206 10:37:55.620132  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.620146  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:55.620151  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:55.620210  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:55.644692  528268 cri.go:89] found id: ""
	I1206 10:37:55.644706  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.644713  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:55.644721  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:55.644732  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:55.712056  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:55.702146   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.702755   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704324   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704667   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.708009   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.712075  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:55.712085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:55.782393  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:55.782414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:55.817896  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:55.817913  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:55.892357  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:55.892385  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.407847  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.417968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:58.418026  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:58.446859  528268 cri.go:89] found id: ""
	I1206 10:37:58.446872  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.446879  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:58.446884  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:58.446946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:58.475161  528268 cri.go:89] found id: ""
	I1206 10:37:58.475175  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.475182  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:58.475187  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:58.475244  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:58.503498  528268 cri.go:89] found id: ""
	I1206 10:37:58.503513  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.503520  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:58.503525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:58.503583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:58.529955  528268 cri.go:89] found id: ""
	I1206 10:37:58.529970  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.529977  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:58.529983  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:58.530038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:58.557174  528268 cri.go:89] found id: ""
	I1206 10:37:58.557188  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.557196  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:58.557201  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:58.557259  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:58.586116  528268 cri.go:89] found id: ""
	I1206 10:37:58.586130  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.586149  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:58.586156  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:58.586211  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:58.620339  528268 cri.go:89] found id: ""
	I1206 10:37:58.620353  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.620361  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:58.620368  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:58.620379  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:58.686086  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:58.686105  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.700471  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:58.700487  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:58.772759  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:58.764751   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.765482   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767041   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767492   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.769066   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:58.772768  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:58.772779  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:58.841699  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:58.841718  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
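
	The container-status command is worth unpacking: the backtick substitution `which crictl || echo crictl` resolves crictl's absolute path when it is installed and passes the bare name through otherwise, and the trailing || falls back to Docker if the crictl invocation fails outright. The same fallback in isolation:

	# Prefer a resolved crictl path, then a PATH lookup, then docker as a last resort.
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
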
	I1206 10:38:01.372136  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.382712  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:01.382776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:01.410577  528268 cri.go:89] found id: ""
	I1206 10:38:01.410591  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.410598  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:01.410603  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:01.410666  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:01.444228  528268 cri.go:89] found id: ""
	I1206 10:38:01.444251  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.444258  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:01.444264  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:01.444331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:01.486632  528268 cri.go:89] found id: ""
	I1206 10:38:01.486645  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.486652  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:01.486657  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:01.486717  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:01.518190  528268 cri.go:89] found id: ""
	I1206 10:38:01.518203  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.518210  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:01.518215  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:01.518276  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:01.543942  528268 cri.go:89] found id: ""
	I1206 10:38:01.543956  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.543963  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:01.543968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:01.544032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:01.569769  528268 cri.go:89] found id: ""
	I1206 10:38:01.569803  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.569832  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:01.569845  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:01.569902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:01.594441  528268 cri.go:89] found id: ""
	I1206 10:38:01.594456  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.594463  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:01.594471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:01.594482  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:01.609124  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:01.609139  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:01.671291  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:01.663080   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.663834   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665465   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665773   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.667299   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:01.671302  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:01.671312  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:01.739749  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:01.739769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.768671  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:01.768687  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.339038  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.349363  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:04.349432  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:04.375032  528268 cri.go:89] found id: ""
	I1206 10:38:04.375045  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.375052  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:04.375058  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:04.375139  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:04.399997  528268 cri.go:89] found id: ""
	I1206 10:38:04.400011  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.400018  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:04.400023  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:04.400081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:04.424851  528268 cri.go:89] found id: ""
	I1206 10:38:04.424876  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.424884  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:04.424889  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:04.424959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:04.453149  528268 cri.go:89] found id: ""
	I1206 10:38:04.453162  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.453170  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:04.453175  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:04.453263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:04.483514  528268 cri.go:89] found id: ""
	I1206 10:38:04.483527  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.483534  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:04.483540  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:04.483598  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:04.511967  528268 cri.go:89] found id: ""
	I1206 10:38:04.511980  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.511987  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:04.511993  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:04.512048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:04.541164  528268 cri.go:89] found id: ""
	I1206 10:38:04.541175  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.541182  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:04.541190  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:04.541199  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:04.575975  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:04.575991  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.642763  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:04.642781  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:04.657313  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:04.657336  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:04.721928  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:04.721939  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:04.721952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
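
	From 10:37:40 through 10:38:07 only the gather order rotates between passes; every probe and every failure is otherwise identical. When triaging a saved copy of a log like this, collapsing that repetition first makes the signal obvious (filename hypothetical):

	# How many times did the apiserver probe come up empty?
	grep -c 'No container was found matching "kube-apiserver"' minikube.log
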
	I1206 10:38:07.293453  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:07.303645  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:07.303708  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:07.329285  528268 cri.go:89] found id: ""
	I1206 10:38:07.329299  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.329306  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:07.329313  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:07.329371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:07.354889  528268 cri.go:89] found id: ""
	I1206 10:38:07.354903  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.354911  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:07.354916  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:07.354975  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:07.380496  528268 cri.go:89] found id: ""
	I1206 10:38:07.380510  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.380518  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:07.380523  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:07.380583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:07.408252  528268 cri.go:89] found id: ""
	I1206 10:38:07.408265  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.408272  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:07.408278  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:07.408341  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:07.434563  528268 cri.go:89] found id: ""
	I1206 10:38:07.434577  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.434584  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:07.434590  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:07.434656  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:07.465668  528268 cri.go:89] found id: ""
	I1206 10:38:07.465681  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.465688  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:07.465694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:07.465755  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:07.496206  528268 cri.go:89] found id: ""
	I1206 10:38:07.496220  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.496227  528268 logs.go:284] No container was found matching "kindnet"
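The block above is minikube enumerating the expected control-plane containers one by one; every crictl query returns an empty ID list, so no Kubernetes components are running at all. A bash sketch reproducing the same checks in one pass (assumes crictl is installed on the node; the component names and flags come from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # --quiet prints only container IDs
      echo "$name: ${ids:-<none>}"
    done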
	I1206 10:38:07.496252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:07.496291  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:07.561228  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:07.561250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
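In the dmesg call above, -P disables the pager, -H prints human-readable timestamps, -L=never disables color, and --level restricts output to warning severity and above; tail keeps the last 400 such lines. A simpler equivalent for manual triage, dropping the formatting flags:

    sudo dmesg --level=warn,err,crit,alert,emerg | tail -n 400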
	I1206 10:38:07.576434  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:07.576450  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:07.645534  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:07.637588   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.638151   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.639755   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.640208   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.641673   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:07.637588   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.638151   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.639755   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.640208   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.641673   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:07.645544  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:07.645555  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:07.713688  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:07.713708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
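The container-status command above resolves crictl via which and falls back to "sudo docker ps -a" when crictl is missing or fails. A more explicit sketch of the same fallback, assuming bash:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a    # preferred: query the CRI runtime directly
    else
      sudo docker ps -a    # fallback when crictl is not installed
    fi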
	I1206 10:38:10.250054  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:10.260518  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:10.260577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:10.287264  528268 cri.go:89] found id: ""
	I1206 10:38:10.287283  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.287291  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:10.287296  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:10.287358  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:10.312333  528268 cri.go:89] found id: ""
	I1206 10:38:10.312347  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.312355  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:10.312360  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:10.312420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:10.336978  528268 cri.go:89] found id: ""
	I1206 10:38:10.336993  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.337000  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:10.337004  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:10.337069  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:10.363441  528268 cri.go:89] found id: ""
	I1206 10:38:10.363455  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.363463  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:10.363468  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:10.363526  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:10.388225  528268 cri.go:89] found id: ""
	I1206 10:38:10.388245  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.388253  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:10.388259  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:10.388320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:10.414362  528268 cri.go:89] found id: ""
	I1206 10:38:10.414375  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.414382  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:10.414388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:10.414445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:10.454478  528268 cri.go:89] found id: ""
	I1206 10:38:10.454491  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.454499  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:10.454508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:10.454518  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:10.524830  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:10.524851  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:10.540277  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:10.540292  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:10.607931  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:10.607942  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:10.607955  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:10.675104  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:10.675134  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:13.206837  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:13.217943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:13.218002  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:13.243670  528268 cri.go:89] found id: ""
	I1206 10:38:13.243684  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.243691  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:13.243697  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:13.243758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:13.268428  528268 cri.go:89] found id: ""
	I1206 10:38:13.268443  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.268450  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:13.268455  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:13.268512  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:13.294024  528268 cri.go:89] found id: ""
	I1206 10:38:13.294038  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.294045  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:13.294050  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:13.294106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:13.321522  528268 cri.go:89] found id: ""
	I1206 10:38:13.321536  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.321543  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:13.321548  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:13.321610  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:13.351214  528268 cri.go:89] found id: ""
	I1206 10:38:13.351228  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.351235  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:13.351240  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:13.351299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:13.376433  528268 cri.go:89] found id: ""
	I1206 10:38:13.376447  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.376454  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:13.376459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:13.376520  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:13.405980  528268 cri.go:89] found id: ""
	I1206 10:38:13.405994  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.406001  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:13.406009  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:13.406019  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:13.481314  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:13.481334  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:13.503361  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:13.503378  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:13.570756  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:13.570765  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:13.570778  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:13.641258  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:13.641282  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:16.171913  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.182483  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.182545  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.210129  528268 cri.go:89] found id: ""
	I1206 10:38:16.210143  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.210151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.210156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.210217  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.237040  528268 cri.go:89] found id: ""
	I1206 10:38:16.237060  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.237067  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.237073  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.237134  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:16.263801  528268 cri.go:89] found id: ""
	I1206 10:38:16.263815  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.263822  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:16.263827  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:16.263886  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:16.289263  528268 cri.go:89] found id: ""
	I1206 10:38:16.289277  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.289284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:16.289289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:16.289347  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:16.317849  528268 cri.go:89] found id: ""
	I1206 10:38:16.317862  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.317870  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:16.317875  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:16.317933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:16.347303  528268 cri.go:89] found id: ""
	I1206 10:38:16.347317  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.347324  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:16.347329  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:16.347387  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:16.373512  528268 cri.go:89] found id: ""
	I1206 10:38:16.373525  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.373542  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:16.373552  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:16.373568  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:16.438751  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:16.438769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:16.455447  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:16.455463  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:16.527176  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:16.527186  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:16.527196  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:16.595033  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:16.595053  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:19.127162  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.137626  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.137685  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.168715  528268 cri.go:89] found id: ""
	I1206 10:38:19.168729  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.168736  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.168741  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.168798  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.199324  528268 cri.go:89] found id: ""
	I1206 10:38:19.199341  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.199354  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.199359  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.199418  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.225589  528268 cri.go:89] found id: ""
	I1206 10:38:19.225601  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.225608  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.225613  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.225670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.251399  528268 cri.go:89] found id: ""
	I1206 10:38:19.251412  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.251420  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.251425  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.251488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:19.276108  528268 cri.go:89] found id: ""
	I1206 10:38:19.276122  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.276129  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:19.276134  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:19.276193  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:19.301269  528268 cri.go:89] found id: ""
	I1206 10:38:19.301282  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.301290  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:19.301295  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:19.301352  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:19.327537  528268 cri.go:89] found id: ""
	I1206 10:38:19.327552  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.327559  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:19.327568  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:19.327578  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.398088  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:19.398114  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:19.413590  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:19.413609  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:19.517843  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:19.517853  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:19.517866  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:19.587464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:19.587485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.115984  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.126048  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.126111  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.152880  528268 cri.go:89] found id: ""
	I1206 10:38:22.152893  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.152900  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.152905  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.152961  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.179175  528268 cri.go:89] found id: ""
	I1206 10:38:22.179190  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.179197  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.179202  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.179263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.204543  528268 cri.go:89] found id: ""
	I1206 10:38:22.204557  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.204565  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.204570  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.204631  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.229269  528268 cri.go:89] found id: ""
	I1206 10:38:22.229283  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.229291  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.229296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.229353  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.255404  528268 cri.go:89] found id: ""
	I1206 10:38:22.255418  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.255425  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.255430  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.255488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.280965  528268 cri.go:89] found id: ""
	I1206 10:38:22.280981  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.280988  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.280994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.281052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:22.309901  528268 cri.go:89] found id: ""
	I1206 10:38:22.309915  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.309922  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:22.309930  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:22.309940  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:22.382110  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:22.382130  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.412045  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:22.412060  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:22.485902  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:22.485921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.501637  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:22.501655  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:22.572937  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
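Note the cadence: the pgrep checks recur roughly every three seconds (10:38:04, :07, :10, ...), so minikube is polling until an apiserver process appears. A bash sketch of that polling pattern, purely illustrative and not minikube's actual implementation (which is Go, per the ssh_runner.go frames above):

    # Poll every 3 s until a kube-apiserver process shows up (illustrative only).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done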
	I1206 10:38:25.074598  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.085017  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.085084  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.110479  528268 cri.go:89] found id: ""
	I1206 10:38:25.110493  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.110500  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.110506  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.110566  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.137467  528268 cri.go:89] found id: ""
	I1206 10:38:25.137481  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.137488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.137493  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.137552  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.163017  528268 cri.go:89] found id: ""
	I1206 10:38:25.163033  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.163040  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.163046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.163105  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.193876  528268 cri.go:89] found id: ""
	I1206 10:38:25.193890  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.193898  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.193903  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.193966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.220362  528268 cri.go:89] found id: ""
	I1206 10:38:25.220376  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.220383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.220388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.220444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.246057  528268 cri.go:89] found id: ""
	I1206 10:38:25.246070  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.246078  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.246083  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.246140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.273646  528268 cri.go:89] found id: ""
	I1206 10:38:25.273660  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.273667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.273675  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.273691  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:25.341507  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:25.341527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:25.356890  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:25.356906  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:25.432607  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
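With the apiserver port closed and zero control-plane containers found on every pass, a reasonable next manual step would be to inspect the kubelet service itself, since it is responsible for starting the static control-plane pods; this assumes systemd on the node, which the journalctl invocations above imply:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager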
	I1206 10:38:25.432617  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:25.432628  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:25.515030  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.515052  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.053670  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.064577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.064641  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.091082  528268 cri.go:89] found id: ""
	I1206 10:38:28.091097  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.091106  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.091111  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.091205  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.116793  528268 cri.go:89] found id: ""
	I1206 10:38:28.116808  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.116815  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.116822  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.116881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.145938  528268 cri.go:89] found id: ""
	I1206 10:38:28.145952  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.145960  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.145965  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.146025  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.171742  528268 cri.go:89] found id: ""
	I1206 10:38:28.171755  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.171763  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.171768  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.171826  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.197528  528268 cri.go:89] found id: ""
	I1206 10:38:28.197542  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.197549  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.197554  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.197613  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.224277  528268 cri.go:89] found id: ""
	I1206 10:38:28.224291  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.224298  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.224303  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.224368  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.252201  528268 cri.go:89] found id: ""
	I1206 10:38:28.252215  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.252223  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.252237  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:28.252248  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.284626  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.284642  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.351035  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.351055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.366043  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.366061  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:28.437473  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:28.437483  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:28.437506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.019982  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.030426  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.030488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.055406  528268 cri.go:89] found id: ""
	I1206 10:38:31.055419  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.055427  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.055432  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.055490  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.081639  528268 cri.go:89] found id: ""
	I1206 10:38:31.081653  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.081660  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.081665  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.081729  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.111871  528268 cri.go:89] found id: ""
	I1206 10:38:31.111886  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.111894  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.111899  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.111959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.142949  528268 cri.go:89] found id: ""
	I1206 10:38:31.142964  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.142971  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.142977  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.143042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.169930  528268 cri.go:89] found id: ""
	I1206 10:38:31.169946  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.169954  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.169959  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.170020  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.196019  528268 cri.go:89] found id: ""
	I1206 10:38:31.196033  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.196041  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.196046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.196104  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.226526  528268 cri.go:89] found id: ""
	I1206 10:38:31.226540  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.226547  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.226556  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.226567  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.289723  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.289733  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:31.289746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.358922  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.358941  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.387252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.387268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:31.460730  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:31.460749  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
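
The cycle above is minikube's apiserver wait loop: it first looks for a kube-apiserver process, then asks the CRI runtime for each control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. A minimal manual reproduction of the same probe, using only the commands the log itself runs (hypothetical hand-run check inside the node, not part of the test harness):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # no matching process found
    sudo crictl ps -a --quiet --name=kube-apiserver   # no container IDs returned

Both come back empty here, which is why every iteration reports "0 containers" for each component and the loop never exits.
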
	I1206 10:38:33.977403  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:33.987866  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:33.987933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.023637  528268 cri.go:89] found id: ""
	I1206 10:38:34.023651  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.023659  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.023664  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.023728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.052242  528268 cri.go:89] found id: ""
	I1206 10:38:34.052256  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.052263  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.052269  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.052330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.077707  528268 cri.go:89] found id: ""
	I1206 10:38:34.077721  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.077728  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.077734  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.077795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.103066  528268 cri.go:89] found id: ""
	I1206 10:38:34.103079  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.103098  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.103103  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.103185  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.132994  528268 cri.go:89] found id: ""
	I1206 10:38:34.133007  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.133015  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.133020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.133081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.159017  528268 cri.go:89] found id: ""
	I1206 10:38:34.159030  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.159038  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.159043  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.159101  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.185998  528268 cri.go:89] found id: ""
	I1206 10:38:34.186012  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.186020  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.186028  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.186042  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.257644  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.257664  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.273073  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.273092  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.344235  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.344247  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:34.344260  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:34.414848  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.414867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
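
Note the fallback chain in the container-status command above: the backtick substitution `which crictl || echo crictl` resolves crictl's full path when it is installed (degrading to the bare name otherwise), and only if that crictl invocation fails does the command fall through to the Docker CLI. Expanded for readability, a behaviorally equivalent sketch:

    CRICTL=$(which crictl || echo crictl)   # path to crictl, or the bare name as a last resort
    sudo "$CRICTL" ps -a || sudo docker ps -a
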
	I1206 10:38:36.966180  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:36.976392  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:36.976457  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.002549  528268 cri.go:89] found id: ""
	I1206 10:38:37.002566  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.002574  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.002580  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.002657  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.033009  528268 cri.go:89] found id: ""
	I1206 10:38:37.033024  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.033031  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.033037  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.033106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.059257  528268 cri.go:89] found id: ""
	I1206 10:38:37.059271  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.059279  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.059285  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.059346  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.090436  528268 cri.go:89] found id: ""
	I1206 10:38:37.090449  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.090457  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.090462  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.090523  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.118194  528268 cri.go:89] found id: ""
	I1206 10:38:37.118208  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.118215  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.118222  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.118284  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.144022  528268 cri.go:89] found id: ""
	I1206 10:38:37.144036  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.144044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.144049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.144107  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.168416  528268 cri.go:89] found id: ""
	I1206 10:38:37.168430  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.168438  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.168445  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.168456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.234878  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.234898  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.250351  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.250374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.316139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.316149  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:37.316159  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:37.385780  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.385800  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:39.916327  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:39.926345  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:39.926412  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:39.953639  528268 cri.go:89] found id: ""
	I1206 10:38:39.953652  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.953660  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:39.953671  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:39.953732  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:39.979049  528268 cri.go:89] found id: ""
	I1206 10:38:39.979064  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.979072  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:39.979077  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:39.979164  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.013684  528268 cri.go:89] found id: ""
	I1206 10:38:40.013700  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.013708  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.013714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.013783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.052804  528268 cri.go:89] found id: ""
	I1206 10:38:40.052820  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.052828  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.052834  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.052902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.084356  528268 cri.go:89] found id: ""
	I1206 10:38:40.084372  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.084380  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.084386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.084451  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.112282  528268 cri.go:89] found id: ""
	I1206 10:38:40.112297  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.112304  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.112312  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.112373  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.140065  528268 cri.go:89] found id: ""
	I1206 10:38:40.140080  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.140087  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.140094  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.140108  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.208521  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.208530  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:40.208541  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:40.280105  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.280126  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.313393  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.313409  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.380769  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.380789  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:42.896735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:42.906913  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:42.906971  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:42.932466  528268 cri.go:89] found id: ""
	I1206 10:38:42.932480  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.932493  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:42.932499  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:42.932560  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:42.962618  528268 cri.go:89] found id: ""
	I1206 10:38:42.962633  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.962641  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:42.962647  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:42.962704  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:42.989497  528268 cri.go:89] found id: ""
	I1206 10:38:42.989511  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.989519  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:42.989525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:42.989581  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.016798  528268 cri.go:89] found id: ""
	I1206 10:38:43.016818  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.016825  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.016831  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.017042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.044571  528268 cri.go:89] found id: ""
	I1206 10:38:43.044589  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.044599  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.044606  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.044679  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.072240  528268 cri.go:89] found id: ""
	I1206 10:38:43.072256  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.072264  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.072269  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.072330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.098196  528268 cri.go:89] found id: ""
	I1206 10:38:43.098211  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.098218  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.098225  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.098237  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.113559  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.113577  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.177585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.177595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:43.177606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:43.251189  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.251210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.278658  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.278673  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:45.849509  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:45.861204  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:45.861266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:45.888209  528268 cri.go:89] found id: ""
	I1206 10:38:45.888228  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.888236  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:45.888241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:45.888306  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:45.913344  528268 cri.go:89] found id: ""
	I1206 10:38:45.913357  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.913365  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:45.913370  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:45.913429  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:45.939830  528268 cri.go:89] found id: ""
	I1206 10:38:45.939844  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.939852  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:45.939857  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:45.939927  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:45.964893  528268 cri.go:89] found id: ""
	I1206 10:38:45.964907  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.964914  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:45.964920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:45.964984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:45.991528  528268 cri.go:89] found id: ""
	I1206 10:38:45.991540  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.991548  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:45.991553  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:45.991614  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.018162  528268 cri.go:89] found id: ""
	I1206 10:38:46.018176  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.018184  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.018190  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.018249  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.045784  528268 cri.go:89] found id: ""
	I1206 10:38:46.045807  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.045814  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.045822  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.045833  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.114786  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.114796  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:46.114808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:46.185171  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.185193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.213442  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.213458  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.280354  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.280374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:48.796511  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:48.807012  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:48.807073  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:48.832313  528268 cri.go:89] found id: ""
	I1206 10:38:48.832337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.832344  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:48.832349  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:48.832420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:48.857914  528268 cri.go:89] found id: ""
	I1206 10:38:48.857928  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.857935  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:48.857940  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:48.858000  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:48.887721  528268 cri.go:89] found id: ""
	I1206 10:38:48.887735  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.887743  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:48.887748  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:48.887808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:48.912329  528268 cri.go:89] found id: ""
	I1206 10:38:48.912343  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.912351  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:48.912356  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:48.912416  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:48.942323  528268 cri.go:89] found id: ""
	I1206 10:38:48.942337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.942344  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:48.942349  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:48.942408  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:48.971776  528268 cri.go:89] found id: ""
	I1206 10:38:48.971790  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.971798  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:48.971803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:48.971861  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:48.997054  528268 cri.go:89] found id: ""
	I1206 10:38:48.997068  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.997076  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:48.997084  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:48.997095  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:49.071387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.071413  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.099724  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.099743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.165471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.165492  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.180707  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.180755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.246459  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
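
Every describe-nodes attempt in these cycles fails identically: kubectl's API-discovery requests to https://localhost:8441 are refused because nothing is listening on the apiserver port. A hypothetical manual check from inside the node that surfaces the same symptom without kubectl's discovery retries:

    curl -ksS https://localhost:8441/healthz   # fails: connection refused (curl exit code 7)

This is the same condition behind the repeated memcache.go "connection refused" errors above.
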
	I1206 10:38:51.747477  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:51.757424  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:51.757483  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:51.785368  528268 cri.go:89] found id: ""
	I1206 10:38:51.785382  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.785390  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:51.785395  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:51.785452  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:51.814468  528268 cri.go:89] found id: ""
	I1206 10:38:51.814482  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.814489  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:51.814494  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:51.814553  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:51.839897  528268 cri.go:89] found id: ""
	I1206 10:38:51.839911  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.839918  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:51.839923  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:51.839980  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:51.865924  528268 cri.go:89] found id: ""
	I1206 10:38:51.865938  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.865951  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:51.865956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:51.866011  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:51.891688  528268 cri.go:89] found id: ""
	I1206 10:38:51.891702  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.891709  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:51.891714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:51.891772  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:51.917048  528268 cri.go:89] found id: ""
	I1206 10:38:51.917062  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.917070  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:51.917075  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:51.917132  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:51.942873  528268 cri.go:89] found id: ""
	I1206 10:38:51.942888  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.942895  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:51.942903  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:51.942914  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.011199  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:52.011209  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:52.011220  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:52.085464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.085485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.119213  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.119230  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:52.189731  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.189751  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.705436  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:54.717135  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:54.717196  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:54.755081  528268 cri.go:89] found id: ""
	I1206 10:38:54.755095  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.755105  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:54.755110  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:54.755209  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:54.780971  528268 cri.go:89] found id: ""
	I1206 10:38:54.780985  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.780993  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:54.780998  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:54.781060  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:54.806877  528268 cri.go:89] found id: ""
	I1206 10:38:54.806891  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.806898  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:54.806904  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:54.806967  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:54.832627  528268 cri.go:89] found id: ""
	I1206 10:38:54.832641  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.832649  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:54.832654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:54.832711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:54.857814  528268 cri.go:89] found id: ""
	I1206 10:38:54.857828  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.857836  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:54.857841  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:54.857897  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:54.883738  528268 cri.go:89] found id: ""
	I1206 10:38:54.883752  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.883759  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:54.883764  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:54.883821  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:54.909479  528268 cri.go:89] found id: ""
	I1206 10:38:54.909493  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.909500  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:54.909508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:54.909519  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:54.975629  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:54.975651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.991150  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:54.991166  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.064619  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:55.064628  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:55.064639  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:55.134387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.134406  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
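Each kubectl attempt above fails with "connection refused" on localhost:8441, and crictl finds no kube-apiserver container at all. A quick manual check of that port on the node — a hedged sketch assuming SSH access to the profile under test; `<profile>` is a placeholder, not a name taken from this log — would be:

    minikube -p <profile> ssh -- sudo ss -tlnp | grep 8441                 # is anything bound to the apiserver port?
    minikube -p <profile> ssh -- curl -sk https://localhost:8441/healthz   # should fail while the apiserver is down

Both commands returning nothing (or a connection error) is consistent with the refused connections recorded above.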
	I1206 10:38:57.664428  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:57.675264  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:57.675328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:57.709021  528268 cri.go:89] found id: ""
	I1206 10:38:57.709035  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.709043  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:57.709048  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:57.709116  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:57.744132  528268 cri.go:89] found id: ""
	I1206 10:38:57.744146  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.744153  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:57.744159  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:57.744226  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:57.778746  528268 cri.go:89] found id: ""
	I1206 10:38:57.778760  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.778767  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:57.778772  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:57.778829  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:57.805263  528268 cri.go:89] found id: ""
	I1206 10:38:57.805276  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.805284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:57.805289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:57.805348  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:57.831152  528268 cri.go:89] found id: ""
	I1206 10:38:57.831166  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.831173  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:57.831178  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:57.831240  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:57.857097  528268 cri.go:89] found id: ""
	I1206 10:38:57.857111  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.857119  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:57.857124  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:57.857189  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:57.882945  528268 cri.go:89] found id: ""
	I1206 10:38:57.882984  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.882992  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:57.883000  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:57.883011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.915176  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:57.915193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:57.981939  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:57.981958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:57.997358  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:57.997373  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.070527  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:58.070538  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:58.070549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.641789  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.651800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.651859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.679593  528268 cri.go:89] found id: ""
	I1206 10:39:00.679606  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.679613  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.679618  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.679673  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:00.712252  528268 cri.go:89] found id: ""
	I1206 10:39:00.712266  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.712273  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:00.712278  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:00.712337  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:00.746867  528268 cri.go:89] found id: ""
	I1206 10:39:00.746881  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.746888  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:00.746894  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:00.746954  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:00.779153  528268 cri.go:89] found id: ""
	I1206 10:39:00.779167  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.779174  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:00.779180  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:00.779241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:00.805143  528268 cri.go:89] found id: ""
	I1206 10:39:00.805157  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.805164  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:00.805170  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:00.805227  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:00.831339  528268 cri.go:89] found id: ""
	I1206 10:39:00.831353  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.831361  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:00.831368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:00.831430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:00.857571  528268 cri.go:89] found id: ""
	I1206 10:39:00.857585  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.857593  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:00.857600  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:00.857611  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:00.925179  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:00.925189  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:00.925200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.994191  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:00.994210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.029067  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.029085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.100689  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.100709  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.616374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.626603  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.626714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.651732  528268 cri.go:89] found id: ""
	I1206 10:39:03.651746  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.651753  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.651758  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.651818  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.679359  528268 cri.go:89] found id: ""
	I1206 10:39:03.679373  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.679380  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.679385  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.679442  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:03.714610  528268 cri.go:89] found id: ""
	I1206 10:39:03.714624  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.714631  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:03.714636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:03.714693  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:03.745765  528268 cri.go:89] found id: ""
	I1206 10:39:03.745780  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.745787  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:03.745792  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:03.745849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:03.771225  528268 cri.go:89] found id: ""
	I1206 10:39:03.771239  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.771247  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:03.771252  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:03.771316  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:03.796796  528268 cri.go:89] found id: ""
	I1206 10:39:03.796853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.796861  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:03.796867  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:03.796925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:03.822839  528268 cri.go:89] found id: ""
	I1206 10:39:03.822853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.822861  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:03.822878  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:03.822888  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:03.858844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:03.858860  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:03.925683  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:03.925703  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.941280  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:03.941297  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.009034  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:04.009044  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:04.009055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
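The cycle repeats at roughly three-second intervals: pgrep for a running apiserver process, crictl for each control-plane container, then a fresh round of log gathering. A minimal shell sketch of that cadence, built from the exact commands shown in this log (an illustration only; minikube's actual wait loop is implemented in Go):

    # Illustration of the polling pattern seen in this log, not minikube source.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sudo crictl ps -a --quiet --name=kube-apiserver   # stays empty throughout this log
        sleep 3
    done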
	I1206 10:39:06.582354  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.592267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.592340  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.617889  528268 cri.go:89] found id: ""
	I1206 10:39:06.617902  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.617909  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.617915  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.617979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.643951  528268 cri.go:89] found id: ""
	I1206 10:39:06.643966  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.643973  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.643978  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.644035  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.669753  528268 cri.go:89] found id: ""
	I1206 10:39:06.669767  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.669774  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.669779  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.669839  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.701353  528268 cri.go:89] found id: ""
	I1206 10:39:06.701373  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.701380  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.701386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.701445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.751930  528268 cri.go:89] found id: ""
	I1206 10:39:06.751944  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.751952  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.751956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.752019  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:06.778713  528268 cri.go:89] found id: ""
	I1206 10:39:06.778727  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.778734  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:06.778741  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:06.778802  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:06.804251  528268 cri.go:89] found id: ""
	I1206 10:39:06.804265  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.804273  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:06.804280  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:06.804290  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.871350  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:06.871368  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:06.885942  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:06.885960  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:06.959058  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:06.959068  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:06.959081  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:07.030114  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.030135  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:09.559397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.569971  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.570039  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.595039  528268 cri.go:89] found id: ""
	I1206 10:39:09.595052  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.595059  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.595065  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.595152  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.621113  528268 cri.go:89] found id: ""
	I1206 10:39:09.621127  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.621135  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.621140  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.651003  528268 cri.go:89] found id: ""
	I1206 10:39:09.651016  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.651024  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.651029  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.651087  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.677104  528268 cri.go:89] found id: ""
	I1206 10:39:09.677118  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.677125  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.677131  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.677187  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.713565  528268 cri.go:89] found id: ""
	I1206 10:39:09.713579  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.713587  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.713592  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.713653  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.741915  528268 cri.go:89] found id: ""
	I1206 10:39:09.741928  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.741935  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.741941  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.741997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.774013  528268 cri.go:89] found id: ""
	I1206 10:39:09.774027  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.774035  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.774042  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.774054  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:09.840091  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:09.840113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.855657  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:09.855675  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:09.919867  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:09.919877  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:09.919901  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:09.991592  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:09.991613  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
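The burst of five memcache.go errors per attempt appears to be kubectl's client-side API discovery retrying against the same refused endpoint. Running the test's own command by hand on the node should reproduce the burst exactly while no apiserver is listening (same binary and kubeconfig paths as above):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig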
	I1206 10:39:12.526559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.537148  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.537208  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.570214  528268 cri.go:89] found id: ""
	I1206 10:39:12.570228  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.570235  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.570241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.570299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.595309  528268 cri.go:89] found id: ""
	I1206 10:39:12.595324  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.595331  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.595342  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.595401  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.620408  528268 cri.go:89] found id: ""
	I1206 10:39:12.620422  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.620429  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.620434  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.620495  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.645606  528268 cri.go:89] found id: ""
	I1206 10:39:12.645621  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.645628  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.645644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.645700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.672105  528268 cri.go:89] found id: ""
	I1206 10:39:12.672119  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.672126  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.672132  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.672191  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.699949  528268 cri.go:89] found id: ""
	I1206 10:39:12.699964  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.699971  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.699976  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.700038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.730867  528268 cri.go:89] found id: ""
	I1206 10:39:12.730881  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.730888  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.730896  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.730907  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.760666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.760682  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.827918  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.827939  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:12.845229  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:12.845250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:12.913571  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:12.913582  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:12.913606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.486285  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.496339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.496397  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.522751  528268 cri.go:89] found id: ""
	I1206 10:39:15.522765  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.522773  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.522782  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.522842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.548733  528268 cri.go:89] found id: ""
	I1206 10:39:15.548747  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.548760  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.548765  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.548823  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.574392  528268 cri.go:89] found id: ""
	I1206 10:39:15.574406  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.574413  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.574418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.574475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.600281  528268 cri.go:89] found id: ""
	I1206 10:39:15.600297  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.600311  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.600316  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.600376  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.626469  528268 cri.go:89] found id: ""
	I1206 10:39:15.626482  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.626490  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.626496  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.626561  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.652394  528268 cri.go:89] found id: ""
	I1206 10:39:15.652407  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.652414  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.652420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.652477  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.679527  528268 cri.go:89] found id: ""
	I1206 10:39:15.679540  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.679553  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.679561  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:15.679571  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.764342  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:15.764363  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:15.798376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.798394  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.868665  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.868685  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.883983  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.883999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.952342  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
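Because crictl never reports a kube-apiserver container, the CRI-O journal collected each cycle is the most likely place to find why the static pod was never created. A hypothetical filter over the same 400 lines the test gathers (the grep terms are guesses at relevant keywords, not strings confirmed by this report):

    sudo journalctl -u crio -n 400 | grep -iE 'kube-apiserver|sandbox|static' || true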
	I1206 10:39:18.453493  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.463876  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.463935  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.490209  528268 cri.go:89] found id: ""
	I1206 10:39:18.490224  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.490231  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.490236  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.490294  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.516967  528268 cri.go:89] found id: ""
	I1206 10:39:18.516981  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.516988  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.516993  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.517054  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.546169  528268 cri.go:89] found id: ""
	I1206 10:39:18.546182  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.546189  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.546194  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.546253  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.571307  528268 cri.go:89] found id: ""
	I1206 10:39:18.571320  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.571327  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.571333  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.571391  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.596842  528268 cri.go:89] found id: ""
	I1206 10:39:18.596856  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.596863  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.596868  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.596924  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.622545  528268 cri.go:89] found id: ""
	I1206 10:39:18.622559  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.622566  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.622571  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.622628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.647866  528268 cri.go:89] found id: ""
	I1206 10:39:18.647879  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.647886  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.647894  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.647904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:18.722841  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:18.722867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:18.738489  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.738506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.804503  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.804514  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:18.804527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:18.873502  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.873520  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.404064  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.414555  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.414615  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.439357  528268 cri.go:89] found id: ""
	I1206 10:39:21.439371  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.439378  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.439384  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.439444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.464257  528268 cri.go:89] found id: ""
	I1206 10:39:21.464270  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.464278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.464283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.464342  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.489051  528268 cri.go:89] found id: ""
	I1206 10:39:21.489065  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.489072  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.489077  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.489133  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.514898  528268 cri.go:89] found id: ""
	I1206 10:39:21.514912  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.514919  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.514930  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.514988  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.540268  528268 cri.go:89] found id: ""
	I1206 10:39:21.540283  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.540290  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.540296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.540361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.564943  528268 cri.go:89] found id: ""
	I1206 10:39:21.564957  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.564965  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.564970  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.565031  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.590819  528268 cri.go:89] found id: ""
	I1206 10:39:21.590833  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.590840  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.590848  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.590858  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.656247  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:21.656258  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:21.656268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:21.726649  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.726669  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.757883  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.757900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.827592  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.827612  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
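The loop above is minikube's control-plane health polling: between retries it checks for a running kube-apiserver process, lists control-plane containers through crictl, and gathers kubelet, dmesg, CRI-O, and container-status logs. To replay the same checks by hand on the node, something like the following should work (a sketch assembled from the commands recorded in this log; the kubectl binary path and kubeconfig location are copied verbatim from the entries above):

    # is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # any control-plane containers, in any state?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # recent kubelet and CRI-O activity
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # the same describe-nodes call the log keeps retrying
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig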
	[... the same diagnostic loop repeats seven more times, roughly every 3 seconds (10:39:24 through 10:39:42), with identical results: pgrep finds no kube-apiserver process; crictl reports no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; and every "kubectl describe nodes" attempt fails with "connection refused" on localhost:8441 ...]
	I1206 10:39:45.054765  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.080943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.081023  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.141872  528268 cri.go:89] found id: ""
	I1206 10:39:45.141889  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.141898  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.141904  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.141970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.187818  528268 cri.go:89] found id: ""
	I1206 10:39:45.187838  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.187846  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.187854  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.187928  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.231785  528268 cri.go:89] found id: ""
	I1206 10:39:45.231815  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.231846  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.231853  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.232001  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.271976  528268 cri.go:89] found id: ""
	I1206 10:39:45.272000  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.272007  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.272020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.272144  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.309755  528268 cri.go:89] found id: ""
	I1206 10:39:45.309770  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.309778  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.309784  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.309859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.337077  528268 cri.go:89] found id: ""
	I1206 10:39:45.337091  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.337098  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.337104  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.337161  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.363255  528268 cri.go:89] found id: ""
	I1206 10:39:45.363269  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.363277  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.363285  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.363295  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.430326  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.430345  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.445222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.445239  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.514305  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:45.514315  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:45.514351  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.586673  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.586702  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
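Each retry cycle above runs the same diagnostic sweep: pgrep for a kube-apiserver process, then a crictl query for every expected control-plane container, then log gathering (kubelet, dmesg, describe nodes, CRI-O, container status). A hand-runnable sketch of the container sweep, built only from the crictl invocation already shown in the log (the component list is copied from the cycle above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # Empty output is the 'found id: ""' case in the log above.
	  [ -z "$ids" ] && echo "no container matching $name" || echo "$name: $ids"
	done

Every sweep in this run comes back empty, consistent with the kubelet never starting the static pods.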
	I1206 10:39:48.117880  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.128191  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.128261  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.153898  528268 cri.go:89] found id: ""
	I1206 10:39:48.153912  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.153919  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.153924  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.153986  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.179947  528268 cri.go:89] found id: ""
	I1206 10:39:48.179960  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.179968  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.179973  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.180032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.206970  528268 cri.go:89] found id: ""
	I1206 10:39:48.206984  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.206992  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.206997  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.207056  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.232490  528268 cri.go:89] found id: ""
	I1206 10:39:48.232504  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.232511  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.232516  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.232574  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.261888  528268 cri.go:89] found id: ""
	I1206 10:39:48.261902  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.261909  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.261915  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.261970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.287239  528268 cri.go:89] found id: ""
	I1206 10:39:48.287259  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.287266  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.287271  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.287327  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.312701  528268 cri.go:89] found id: ""
	I1206 10:39:48.312716  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.312723  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.312730  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.312741  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.379854  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.379873  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:48.395027  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.395043  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.467966  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:48.467977  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:48.467999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:48.537326  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.537347  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:51.077353  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.088357  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.088422  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.113964  528268 cri.go:89] found id: ""
	I1206 10:39:51.113978  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.113986  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.113991  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.114048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.141966  528268 cri.go:89] found id: ""
	I1206 10:39:51.141981  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.141989  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.141994  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.142065  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.170585  528268 cri.go:89] found id: ""
	I1206 10:39:51.170599  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.170607  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.170612  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.170670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.196958  528268 cri.go:89] found id: ""
	I1206 10:39:51.196972  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.196980  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.196985  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.197045  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.222240  528268 cri.go:89] found id: ""
	I1206 10:39:51.222255  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.222262  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.222267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.222328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.248023  528268 cri.go:89] found id: ""
	I1206 10:39:51.248038  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.248045  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.248051  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.248110  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.276094  528268 cri.go:89] found id: ""
	I1206 10:39:51.276108  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.276115  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.276122  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.276132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.342420  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.342443  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.357018  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.357034  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.423986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.423996  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:51.424007  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:51.493620  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.493640  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:54.023829  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:54.034889  528268 kubeadm.go:602] duration metric: took 4m2.326619845s to restartPrimaryControlPlane
	W1206 10:39:54.034955  528268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:39:54.035078  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:39:54.453084  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:39:54.466906  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:39:54.474624  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:39:54.474678  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:39:54.482552  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:39:54.482562  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:39:54.482612  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:39:54.490238  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:39:54.490301  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:39:54.497760  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:39:54.505776  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:39:54.505840  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:39:54.513397  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.521456  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:39:54.521517  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.529274  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:39:54.537105  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:39:54.537161  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
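The grep-then-rm sequence above is the stale-kubeconfig cleanup: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm can regenerate it. grep also exits non-zero when the file is missing (status 2 here), so absent files take the same, harmless removal path. A condensed sketch of that logic, assuming the endpoint and file set shown in the log:

	endpoint="https://control-plane.minikube.internal:8441"
	for name in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${name}.conf"
	  # Missing file or missing endpoint: either way, delete and regenerate.
	  sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	done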
	I1206 10:39:54.544719  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:39:54.584997  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:39:54.585045  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:39:54.652750  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:39:54.652815  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:39:54.652850  528268 kubeadm.go:319] OS: Linux
	I1206 10:39:54.652893  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:39:54.652940  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:39:54.652986  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:39:54.653033  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:39:54.653079  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:39:54.653126  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:39:54.653171  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:39:54.653217  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:39:54.653262  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:39:54.728791  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:39:54.728901  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:39:54.729018  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:39:54.737647  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:39:54.741159  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:39:54.741265  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:39:54.741337  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:39:54.741433  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:39:54.741505  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:39:54.741585  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:39:54.741651  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:39:54.741743  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:39:54.741813  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:39:54.741895  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:39:54.741991  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:39:54.742045  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:39:54.742113  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:39:55.375743  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:39:55.444664  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:39:55.561708  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:39:55.802678  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:39:55.992428  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:39:55.993134  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:39:55.995941  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:39:55.999335  528268 out.go:252]   - Booting up control plane ...
	I1206 10:39:55.999434  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:39:55.999507  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:39:55.999569  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:39:56.016567  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:39:56.016688  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:39:56.025029  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:39:56.025345  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:39:56.025411  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:39:56.167783  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:39:56.167896  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:43:56.165890  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000163749s
	I1206 10:43:56.165916  528268 kubeadm.go:319] 
	I1206 10:43:56.165973  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:43:56.166007  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:43:56.166124  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:43:56.166130  528268 kubeadm.go:319] 
	I1206 10:43:56.166237  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:43:56.166298  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:43:56.166345  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:43:56.166349  528268 kubeadm.go:319] 
	I1206 10:43:56.171451  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:43:56.171899  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:43:56.172014  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:43:56.172288  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:43:56.172293  528268 kubeadm.go:319] 
	I1206 10:43:56.172374  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
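Of the three preflight warnings above, the cgroups v1 deprecation is a plausible root cause on this cgroup v1 host (kernel 5.15): per the warning, kubelet v1.35 and newer refuses cgroup v1 unless explicitly opted back in, and the opt-in it names is the FailCgroupV1 option, exposed in KubeletConfiguration as the camel-cased field failCgroupV1. A minimal sketch of what that opt-in would look like; this is illustrative only, not something the test applies, since minikube generates /var/lib/kubelet/config.yaml itself:

	# Sketch: opt kubelet back into cgroup v1 support (see the KEP linked
	# in the warning). <<- strips the leading tabs so the heredoc is valid.
	sudo tee -a /var/lib/kubelet/config.yaml >/dev/null <<-'EOF'
	failCgroupV1: false
	EOF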
	W1206 10:43:56.172501  528268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000163749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
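The failing check above is kubeadm polling the kubelet's local healthz endpoint for up to 4m0s. The same probe, plus the two triage commands kubeadm recommends, can be run by hand on the node; this assumes curl and systemd, both present in the minikube node image:

	# kubeadm's health probe, taken verbatim from the error message:
	curl -sSL http://127.0.0.1:10248/healthz; echo
	# The triage commands suggested above (--no-pager keeps output scriptable):
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50

"connection refused" on the probe, as here, means the kubelet never bound the port at all, as opposed to binding it and reporting unhealthy.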
	
	I1206 10:43:56.172597  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:43:56.619462  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:43:56.633229  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:43:56.633287  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:43:56.641609  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:43:56.641619  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:43:56.641669  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:43:56.649494  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:43:56.649548  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:43:56.657009  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:43:56.665153  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:43:56.665204  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:43:56.672965  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.681003  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:43:56.681063  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.688721  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:43:56.696901  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:43:56.696963  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:43:56.704620  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:43:56.745749  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:43:56.745826  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:43:56.814552  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:43:56.814625  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:43:56.814668  528268 kubeadm.go:319] OS: Linux
	I1206 10:43:56.814710  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:43:56.814764  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:43:56.814817  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:43:56.814861  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:43:56.814913  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:43:56.814977  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:43:56.815030  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:43:56.815078  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:43:56.815150  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:43:56.882919  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:43:56.883028  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:43:56.883177  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:43:56.891776  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:43:56.897133  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:43:56.897243  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:43:56.897331  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:43:56.897418  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:43:56.897483  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:43:56.897556  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:43:56.897613  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:43:56.897679  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:43:56.897743  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:43:56.897822  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:43:56.897898  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:43:56.897938  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:43:56.897997  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:43:57.103756  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:43:57.598666  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:43:58.161834  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:43:58.402161  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:43:58.630471  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:43:58.631113  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:43:58.634023  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:43:58.637198  528268 out.go:252]   - Booting up control plane ...
	I1206 10:43:58.637294  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:43:58.637640  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:43:58.639086  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:43:58.654264  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:43:58.654366  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:43:58.662722  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:43:58.663439  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:43:58.663774  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:43:58.799365  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:43:58.799473  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:47:58.799403  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000249913s
	I1206 10:47:58.799433  528268 kubeadm.go:319] 
	I1206 10:47:58.799491  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:47:58.799521  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:47:58.799619  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:47:58.799623  528268 kubeadm.go:319] 
	I1206 10:47:58.799720  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:47:58.799749  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:47:58.799777  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:47:58.799780  528268 kubeadm.go:319] 
	I1206 10:47:58.803822  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:47:58.804249  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:47:58.804357  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:47:58.804590  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:47:58.804595  528268 kubeadm.go:319] 
	I1206 10:47:58.804663  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:47:58.804715  528268 kubeadm.go:403] duration metric: took 12m7.139257328s to StartCluster
	I1206 10:47:58.804746  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:47:58.804808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:47:58.833842  528268 cri.go:89] found id: ""
	I1206 10:47:58.833855  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.833863  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:47:58.833869  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:47:58.833925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:47:58.859642  528268 cri.go:89] found id: ""
	I1206 10:47:58.859656  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.859663  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:47:58.859668  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:47:58.859731  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:47:58.888835  528268 cri.go:89] found id: ""
	I1206 10:47:58.888850  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.888857  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:47:58.888863  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:47:58.888920  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:47:58.913692  528268 cri.go:89] found id: ""
	I1206 10:47:58.913706  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.913713  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:47:58.913718  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:47:58.913775  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:47:58.941639  528268 cri.go:89] found id: ""
	I1206 10:47:58.941653  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.941660  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:47:58.941671  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:47:58.941728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:47:58.968219  528268 cri.go:89] found id: ""
	I1206 10:47:58.968240  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.968249  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:47:58.968254  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:47:58.968312  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:47:58.993376  528268 cri.go:89] found id: ""
	I1206 10:47:58.993390  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.993397  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:47:58.993405  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:47:58.993415  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:47:59.059491  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:47:59.059510  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:47:59.075692  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:47:59.075708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:47:59.140902  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:47:59.140911  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:47:59.140922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:47:59.218521  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:47:59.218539  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
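The "container status" command above carries a double fallback: "which crictl || echo crictl" resolves the binary path, degrading to the bare name (and hence a clean command-not-found) when crictl is absent, and the trailing "|| sudo docker ps -a" retries with docker if the crictl listing fails for any reason. A roughly equivalent expanded form, under the same assumptions:

	if crictl_bin=$(which crictl); then
	  sudo "$crictl_bin" ps -a || sudo docker ps -a   # crictl present but may still fail
	else
	  sudo docker ps -a                               # crictl not installed
	fi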
	W1206 10:47:59.255468  528268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:47:59.255514  528268 out.go:285] * 
	W1206 10:47:59.255766  528268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1206 10:47:59.255841  528268 out.go:285] * 
	W1206 10:47:59.258456  528268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:47:59.265427  528268 out.go:203] 
	W1206 10:47:59.268413  528268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1206 10:47:59.268473  528268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:47:59.268491  528268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:47:59.271584  528268 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040211726Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040248033Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040298977Z" level=info msg="Create NRI interface"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040397822Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.04040656Z" level=info msg="runtime interface created"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040418097Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040424414Z" level=info msg="runtime interface starting up..."
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040430519Z" level=info msg="starting plugins..."
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040443565Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:35:50 functional-123579 crio[9949]: time="2025-12-06T10:35:50.040509278Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:35:50 functional-123579 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.732761675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c00b0212-e336-4d22-92e1-7d2bc5879a6e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.733702159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9f684ee3-1cff-44ee-b48c-175c742cbd8a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.734357315Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b1ddac76-5aa4-4140-b7f7-c9eed400c171 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.734837772Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7e125323-ff3c-4e31-b0b9-3d9689de3e58 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.735631552Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=67dc2959-1f35-4122-97f6-07949ee5c60d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.7361477Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=78397495-3170-4295-8073-cc8bd3750cff name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.736754759Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=b77669c2-3fed-4601-ace3-1a76e50882f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:48:03.019057   21359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:03.019897   21359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:03.021798   21359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:03.022673   21359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:03.024452   21359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:48:03 up  3:30,  0 user,  load average: 0.42, 0.21, 0.46
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:48:00 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:01 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2131.
	Dec 06 10:48:01 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:01 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:01 functional-123579 kubelet[21237]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:01 functional-123579 kubelet[21237]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:01 functional-123579 kubelet[21237]: E1206 10:48:01.500960   21237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:01 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:01 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:02 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2132.
	Dec 06 10:48:02 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:02 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:02 functional-123579 kubelet[21272]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:02 functional-123579 kubelet[21272]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:02 functional-123579 kubelet[21272]: E1206 10:48:02.247857   21272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:02 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:02 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:02 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2133.
	Dec 06 10:48:02 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:02 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:02 functional-123579 kubelet[21349]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:02 functional-123579 kubelet[21349]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:02 functional-123579 kubelet[21349]: E1206 10:48:02.993617   21349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:02 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:02 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
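
The kubelet journal above pins the failure down: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless the KubeletConfiguration field FailCgroupV1 is explicitly set to false (the preflight warning cites KEP-5573 for the same requirement), which is why the service loops through restarts 2131-2133 instead of ever answering the healthz probe. Below is a minimal sketch of the opt-in the error message describes, targeting the /var/lib/kubelet/config.yaml path shown in the kubeadm log; it is illustrative only, since kubeadm rewrites that file on every init, and minikube's own hint (--extra-config=kubelet.cgroup-driver=systemd) concerns the cgroup driver rather than this cgroup v1 validation.

	# Sketch only: opt kubelet back into cgroup v1, as the kubelet error
	# message above suggests. The config path comes from the kubeadm log;
	# kubeadm regenerates this file on each init, so the edit is not durable.
	cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet
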
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (352.259686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-123579 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-123579 apply -f testdata/invalidsvc.yaml: exit status 1 (54.083489ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-123579 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)
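
The failure here is reachability, not the manifest: kubectl aborts because it cannot download the OpenAPI schema from 192.168.49.2:8441, where nothing is listening. The --validate=false escape hatch named in the error only skips that client-side schema check; a sketch of kubectl's own suggestion, which would still require a reachable apiserver to apply anything:

	# kubectl's suggested workaround from the error above: skip client-side
	# validation. The apply itself still needs a reachable apiserver.
	kubectl --context functional-123579 apply --validate=false -f testdata/invalidsvc.yaml
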

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-123579 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-123579 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-123579 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-123579 --alsologtostderr -v=1] stderr:
I1206 10:50:16.412901  547159 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:16.413022  547159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:16.413033  547159 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:16.413038  547159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:16.413296  547159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:16.413565  547159 mustload.go:66] Loading cluster: functional-123579
I1206 10:50:16.414008  547159 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:16.414470  547159 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:16.440798  547159 host.go:66] Checking if "functional-123579" exists ...
I1206 10:50:16.441316  547159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:50:16.519372  547159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:50:16.508435459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:50:16.519553  547159 api_server.go:166] Checking apiserver status ...
I1206 10:50:16.519633  547159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 10:50:16.519699  547159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:16.537909  547159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
W1206 10:50:16.644601  547159 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1206 10:50:16.647788  547159 out.go:179] * The control-plane node functional-123579 apiserver is not running: (state=Stopped)
I1206 10:50:16.650541  547159 out.go:179]   To start a cluster, run: "minikube start -p functional-123579"
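
The dashboard command exits without printing a URL because its apiserver probe (the pgrep run above) finds no kube-apiserver process. A sketch that gates the dashboard on apiserver state first, reusing the status template the test harness itself queries below; the profile name and port are taken from the command under test:

	# Sketch: only launch the dashboard once the apiserver reports Running.
	if [ "$(minikube status -p functional-123579 --format '{{.APIServer}}')" != "Running" ]; then
	  minikube start -p functional-123579
	fi
	minikube dashboard --url --port 36195 -p functional-123579
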
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
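
The inspect output shows the cluster is addressable two ways: directly at 192.168.49.2:8441 on the functional-123579 network, and through the published loopback mapping 127.0.0.1:33186. The same Go template minikube uses above to resolve the SSH port applies to the apiserver port; a sketch:

	# Sketch: read the published host port for the apiserver (8441/tcp),
	# mirroring the template minikube applies to 22/tcp above.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-123579
	# Expected output, given the mapping above: 33186
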
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (320.492755ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-123579 addons list -o json                                                                                                               │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001:/mount-9p --alsologtostderr -v=1              │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-123579 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh -- ls -la /mount-9p                                                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh cat /mount-9p/test-1765018209109782394                                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-123579 ssh sudo umount -f /mount-9p                                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ mount     │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1745843341/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-123579 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-123579 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh -- ls -la /mount-9p                                                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh sudo umount -f /mount-9p                                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount1 --alsologtostderr -v=1                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount2 --alsologtostderr -v=1                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount3 --alsologtostderr -v=1                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-123579 ssh findmnt -T /mount1                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-123579 ssh findmnt -T /mount1                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh findmnt -T /mount2                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh       │ functional-123579 ssh findmnt -T /mount3                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ mount     │ -p functional-123579 --kill=true                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-123579 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-123579 --alsologtostderr -v=1                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:50:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:50:16.228088  547112 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:50:16.228226  547112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:16.228235  547112 out.go:374] Setting ErrFile to fd 2...
	I1206 10:50:16.228242  547112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:16.228610  547112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:50:16.229021  547112 out.go:368] Setting JSON to false
	I1206 10:50:16.229905  547112 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12768,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:50:16.229980  547112 start.go:143] virtualization:  
	I1206 10:50:16.233324  547112 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:50:16.236330  547112 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:50:16.236404  547112 notify.go:221] Checking for updates...
	I1206 10:50:16.242247  547112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:50:16.245134  547112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:50:16.248006  547112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:50:16.250882  547112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:50:16.253750  547112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:50:16.256997  547112 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:50:16.257560  547112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:50:16.278739  547112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:50:16.278856  547112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:50:16.342153  547112 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:50:16.332904034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:50:16.342267  547112 docker.go:319] overlay module found
	I1206 10:50:16.345362  547112 out.go:179] * Using the docker driver based on existing profile
	I1206 10:50:16.348240  547112 start.go:309] selected driver: docker
	I1206 10:50:16.348265  547112 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:50:16.348367  547112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:50:16.351933  547112 out.go:203] 
	W1206 10:50:16.354791  547112 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:50:16.357639  547112 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510413066Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510587528Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510631539Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510692789Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542613043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.54278168Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542832714Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.568965528Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569093041Z" level=info msg="Image localhost/kicbase/echo-server:functional-123579 not found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569130307Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-123579 found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.415971983Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416234295Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416285124Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416360913Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443629234Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443787499Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443828999Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.48107794Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=b88f3676-3120-4861-8534-602a63bfd49e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:50:17.726777   24009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:17.727417   24009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:17.728980   24009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:17.729495   24009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:17.731032   24009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:17 up  3:32,  0 user,  load average: 0.83, 0.43, 0.51
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:50:14 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:15 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2310.
	Dec 06 10:50:15 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:15 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:15 functional-123579 kubelet[23890]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:15 functional-123579 kubelet[23890]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:15 functional-123579 kubelet[23890]: E1206 10:50:15.752130   23890 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:15 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:15 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:16 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2311.
	Dec 06 10:50:16 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:16 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:16 functional-123579 kubelet[23895]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:16 functional-123579 kubelet[23895]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:16 functional-123579 kubelet[23895]: E1206 10:50:16.491326   23895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:16 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:16 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:17 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2312.
	Dec 06 10:50:17 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:17 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:17 functional-123579 kubelet[23924]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:17 functional-123579 kubelet[23924]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:17 functional-123579 kubelet[23924]: E1206 10:50:17.245671   23924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:17 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:17 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
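The kubelet section above contains the likely root cause of this failure group: kubelet exits during configuration validation because the host is still on cgroup v1, and systemd restarts it in a tight loop (restart counters 2310-2312), so the apiserver on port 8441 never comes up and every kubectl call above is refused. To confirm which cgroup version a node runs, a minimal sketch, assuming a standard Linux host with GNU coreutils:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means cgroup v1.
	stat -fc %T /sys/fs/cgroup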
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (350.390512ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.81s)
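Note that an exit status 2 from "minikube status" is how the harness detects a stopped component: it is treated as non-fatal ("may be ok" above), and kubectl commands are skipped instead of failing hard. The same gate can be reproduced outside the harness, a minimal sketch reusing the status probe already shown in this report:

	# Skip kubectl work unless the apiserver component reports Running.
	if [ "$(out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-123579)" != "Running" ]; then
		echo "apiserver is not running, skipping kubectl commands"
	fi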

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 status: exit status 2 (313.629722ms)

-- stdout --
	functional-123579
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-123579 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (324.334674ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-123579 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 status -o json: exit status 2 (311.636724ms)

-- stdout --
	{"Name":"functional-123579","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-123579 status -o json" : exit status 2
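All three invocations render the same status struct; the custom -f string and -o json only change how the Host/Kubelet/APIServer/Kubeconfig fields are printed (a Go text/template for -f, plain JSON for -o json). The JSON form is the easiest to post-process, a minimal sketch assuming jq is available on the host (which this report does not verify):

	# Flatten the JSON status onto one line; the JSON is still written to
	# stdout even when minikube exits 2 because a component is stopped.
	out/minikube-linux-arm64 -p functional-123579 status -o json \
		| jq -r '"host=\(.Host) kubelet=\(.Kubelet) apiserver=\(.APIServer)"'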
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
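Most of the inspect document above is stable boilerplate; the fields the post-mortem actually needs can be pulled directly with a Go template, the same mechanism minikube itself uses for its port lookups later in this log. A minimal sketch extracting the host port published for the apiserver's 8441/tcp (33186 in the dump above):

	# Print only the host port mapped to the apiserver's container port.
	docker inspect functional-123579 \
		--format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}'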
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (335.914609ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image save kicbase/echo-server:functional-123579 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/ssl/certs/4880682.pem                                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image rm kicbase/echo-server:functional-123579 --alsologtostderr                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /usr/share/ca-certificates/4880682.pem                                                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/test/nested/copy/488068/hosts                                                                                         │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service list                                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ image   │ functional-123579 image save --daemon kicbase/echo-server:functional-123579 --alsologtostderr                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service list -o json                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ service │ functional-123579 service --namespace=default --https --url hello-node                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-123579 ssh echo hello                                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service hello-node --url --format={{.IP}}                                                                                               │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-123579 ssh cat /etc/hostname                                                                                                                   │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service hello-node --url                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-123579 tunnel --alsologtostderr                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-123579 tunnel --alsologtostderr                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-123579 tunnel --alsologtostderr                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ addons  │ functional-123579 addons list                                                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ addons  │ functional-123579 addons list -o json                                                                                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:35:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:35:46.955658  528268 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:46.955828  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.955833  528268 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:46.955837  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.956177  528268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:35:46.956655  528268 out.go:368] Setting JSON to false
	I1206 10:35:46.957664  528268 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11898,"bootTime":1765005449,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:35:46.957734  528268 start.go:143] virtualization:  
	I1206 10:35:46.961283  528268 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:35:46.964510  528268 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:35:46.964613  528268 notify.go:221] Checking for updates...
	I1206 10:35:46.968278  528268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:46.971356  528268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:35:46.974199  528268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:35:46.977104  528268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:35:46.980765  528268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:46.984213  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:46.984322  528268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:47.012645  528268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:35:47.012749  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.074577  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.064697556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.074671  528268 docker.go:319] overlay module found
	I1206 10:35:47.077640  528268 out.go:179] * Using the docker driver based on existing profile
	I1206 10:35:47.080521  528268 start.go:309] selected driver: docker
	I1206 10:35:47.080533  528268 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.080637  528268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:47.080758  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.138440  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.128848609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.138821  528268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:35:47.138844  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:47.138899  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:47.138936  528268 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.144166  528268 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:35:47.147068  528268 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:35:47.149949  528268 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:35:47.152780  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:47.152816  528268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:35:47.152824  528268 cache.go:65] Caching tarball of preloaded images
	I1206 10:35:47.152870  528268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:35:47.152921  528268 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:35:47.152931  528268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:35:47.153043  528268 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:35:47.172511  528268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:35:47.172523  528268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:35:47.172545  528268 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:35:47.172580  528268 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:35:47.172652  528268 start.go:364] duration metric: took 54.497µs to acquireMachinesLock for "functional-123579"
	I1206 10:35:47.172672  528268 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:35:47.172676  528268 fix.go:54] fixHost starting: 
	I1206 10:35:47.172937  528268 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:35:47.189604  528268 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:35:47.189624  528268 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:35:47.192615  528268 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:35:47.192637  528268 machine.go:94] provisionDockerMachine start ...
	I1206 10:35:47.192731  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.209670  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.209990  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.209996  528268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:35:47.362840  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.362854  528268 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:35:47.362918  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.381544  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.381860  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.381868  528268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:35:47.544930  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.545031  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.563487  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.563810  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.563823  528268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:35:47.717170  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:35:47.717187  528268 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:35:47.717204  528268 ubuntu.go:190] setting up certificates
	I1206 10:35:47.717211  528268 provision.go:84] configureAuth start
	I1206 10:35:47.717282  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:47.741856  528268 provision.go:143] copyHostCerts
	I1206 10:35:47.741924  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:35:47.741936  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:35:47.742009  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:35:47.742105  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:35:47.742109  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:35:47.742132  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:35:47.742180  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:35:47.742184  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:35:47.742206  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:35:47.742252  528268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
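The "generating server cert" step above happens inside minikube's Go code; as a rough openssl equivalent of minting a server.pem with the SANs listed in the log (key size, validity period, and file paths here are illustrative assumptions, and this must run under bash for the process substitution):

    # Hypothetical openssl sketch of the server cert generation above
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.functional-123579/CN=functional-123579"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-123579,DNS:localhost,DNS:minikube')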
	I1206 10:35:47.924439  528268 provision.go:177] copyRemoteCerts
	I1206 10:35:47.924500  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:35:47.924538  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.942367  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.047397  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:35:48.065928  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:35:48.085149  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:35:48.103937  528268 provision.go:87] duration metric: took 386.701009ms to configureAuth
	I1206 10:35:48.103956  528268 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:35:48.104161  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:48.104265  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.122386  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:48.122699  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:48.122711  528268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:35:48.484149  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:35:48.484161  528268 machine.go:97] duration metric: took 1.291517603s to provisionDockerMachine
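The crio.minikube drop-in written a few lines up only takes effect if the crio unit actually sources it; whether crio.service in the kicbase image references /etc/sysconfig/crio.minikube can be checked from the node (a diagnostic probe, not part of the test run):

    # Show any EnvironmentFile= lines the unit declares
    systemctl cat crio | grep -n 'EnvironmentFile'
    # Confirm CRIO_MINIKUBE_OPTIONS made it into the service environment
    sudo systemctl show crio -p Environment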
	I1206 10:35:48.484171  528268 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:35:48.484183  528268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:35:48.484243  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:35:48.484311  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.507680  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.615171  528268 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:35:48.618416  528268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:35:48.618434  528268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:35:48.618444  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:35:48.618496  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:35:48.618569  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:35:48.618650  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:35:48.618693  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:35:48.626464  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:48.643882  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:35:48.662582  528268 start.go:296] duration metric: took 178.395271ms for postStartSetup
	I1206 10:35:48.662675  528268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:35:48.662713  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.680751  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.784322  528268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:35:48.789238  528268 fix.go:56] duration metric: took 1.616554387s for fixHost
	I1206 10:35:48.789253  528268 start.go:83] releasing machines lock for "functional-123579", held for 1.616594099s
	I1206 10:35:48.789324  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:48.807477  528268 ssh_runner.go:195] Run: cat /version.json
	I1206 10:35:48.807520  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.807562  528268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:35:48.807618  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.828942  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.845083  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:49.020126  528268 ssh_runner.go:195] Run: systemctl --version
	I1206 10:35:49.026608  528268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:35:49.065500  528268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:35:49.069961  528268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:35:49.070024  528268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:35:49.077978  528268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
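The find invocation above loses its shell quoting when logged; restored, with the rename factored into a safely quoted sh -c (an equivalent sketch, not the literal string minikube sends):

    # Rename any bridge/podman CNI configs out of the way
    # (the whole find runs as root, so no inner sudo is needed)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;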
	I1206 10:35:49.077992  528268 start.go:496] detecting cgroup driver to use...
	I1206 10:35:49.078033  528268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:35:49.078078  528268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:35:49.093402  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:35:49.106707  528268 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:35:49.106771  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:35:49.122603  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:35:49.135424  528268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:35:49.251969  528268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:35:49.384025  528268 docker.go:234] disabling docker service ...
	I1206 10:35:49.384082  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:35:49.398904  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:35:49.412283  528268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:35:49.535452  528268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:35:49.651851  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:35:49.665735  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:35:49.680503  528268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:35:49.680561  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.689947  528268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:35:49.690006  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.699358  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.708725  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.718744  528268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:35:49.727534  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.737013  528268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.745582  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
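Taken together, the sed edits above should leave the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl pinned in the cri-o drop-in; a quick way to confirm the net effect (the commented output is the expected rough shape, not captured from this run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # should show, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",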
	I1206 10:35:49.754308  528268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:35:49.762144  528268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:35:49.769875  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:49.884338  528268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:35:50.052236  528268 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:35:50.052348  528268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:35:50.057582  528268 start.go:564] Will wait 60s for crictl version
	I1206 10:35:50.057651  528268 ssh_runner.go:195] Run: which crictl
	I1206 10:35:50.062638  528268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:35:50.100652  528268 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
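The two 60-second waits above (first for the CRI socket, then for a crictl handshake) amount to a simple poll; sketched in shell:

    # Poll up to 60s for the CRI socket, then ask crictl for its version
    for _ in $(seq 1 60); do
      test -S /var/run/crio/crio.sock && break
      sleep 1
    done
    sudo /usr/local/bin/crictl version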
	I1206 10:35:50.100743  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.139579  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.174800  528268 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:35:50.177732  528268 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:35:50.194850  528268 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:35:50.201950  528268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:35:50.204938  528268 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:35:50.205078  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:50.205145  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.240680  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.240692  528268 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:35:50.240750  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.267939  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.267955  528268 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:35:50.267962  528268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:35:50.268053  528268 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:35:50.268129  528268 ssh_runner.go:195] Run: crio config
	I1206 10:35:50.326220  528268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:35:50.326240  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:50.326248  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:50.326256  528268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:35:50.326280  528268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:35:50.326407  528268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:35:50.326477  528268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:35:50.334319  528268 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:35:50.334378  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:35:50.341826  528268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:35:50.354245  528268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:35:50.367015  528268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
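With kubeadm.yaml.new now staged on the node, the generated config can be sanity-checked before the init phases later in the log; recent kubeadm releases ship a "config validate" subcommand (a sketch using the pinned binary path from the log; availability of the subcommand in this beta build is an assumption):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new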
	I1206 10:35:50.379350  528268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:35:50.382958  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:50.504018  528268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:35:50.930865  528268 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:35:50.930875  528268 certs.go:195] generating shared ca certs ...
	I1206 10:35:50.930889  528268 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:35:50.931046  528268 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:35:50.931093  528268 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:35:50.931099  528268 certs.go:257] generating profile certs ...
	I1206 10:35:50.931220  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:35:50.931274  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:35:50.931318  528268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:35:50.931430  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:35:50.931460  528268 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:35:50.931466  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:35:50.931493  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:35:50.931515  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:35:50.931536  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:35:50.931577  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:50.932148  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:35:50.953643  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:35:50.975543  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:35:50.998708  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:35:51.019841  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:35:51.038179  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:35:51.055740  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:35:51.075573  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:35:51.094756  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:35:51.113922  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:35:51.132368  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:35:51.150650  528268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:35:51.163984  528268 ssh_runner.go:195] Run: openssl version
	I1206 10:35:51.171418  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.179298  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:35:51.187013  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190756  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190814  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.231889  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:35:51.239348  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.246609  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:35:51.254276  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258574  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258631  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.301011  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:35:51.308790  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.316400  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:35:51.324195  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328353  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328409  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.371753  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
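The hash names probed above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes: OpenSSL looks CA certs up by hash, so each PEM installed into /usr/share/ca-certificates gets a "<hash>.0" symlink under /etc/ssl/certs. Each install above follows this pattern (sketch, shown for the minikubeCA cert):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"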
	I1206 10:35:51.379339  528268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:35:51.383319  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:35:51.424469  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:35:51.465529  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:35:51.511345  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:35:51.565170  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:35:51.614532  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
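The "-checkend 86400" runs above ask openssl whether each control-plane cert expires within the next 24 hours; exit status 0 means the cert stays valid at least that long, so a plain shell check reads:

    # Succeeds (and prints) only if the cert is still valid 24h from now
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for >24h"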
	I1206 10:35:51.665468  528268 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:51.665553  528268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:35:51.665612  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.699589  528268 cri.go:89] found id: ""
	I1206 10:35:51.699652  528268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:35:51.708250  528268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:35:51.708260  528268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:35:51.708318  528268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:35:51.716593  528268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.717135  528268 kubeconfig.go:125] found "functional-123579" server: "https://192.168.49.2:8441"
	I1206 10:35:51.718506  528268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:35:51.728290  528268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:21:13.758601441 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:35:50.371679399 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1206 10:35:51.728307  528268 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:35:51.728319  528268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:35:51.728381  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.763757  528268 cri.go:89] found id: ""
	I1206 10:35:51.763820  528268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:35:51.777420  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:35:51.785097  528268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:25 /etc/kubernetes/scheduler.conf
	
	I1206 10:35:51.785162  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:35:51.792642  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:35:51.800316  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.800387  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:35:51.808313  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.815662  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.815715  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.823153  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:35:51.831093  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.831167  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:35:51.838577  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:35:51.846346  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:51.894809  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:52.979571  528268 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084737023s)
	I1206 10:35:52.979630  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.188528  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.255794  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.309672  528268 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:35:53.309740  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:53.810758  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... [the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe repeated every ~500ms from 10:35:54 through 10:36:52, with no kube-apiserver process found on any attempt] ...
	I1206 10:36:52.810599  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
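That probe loop (condensed above) is minikube polling for the apiserver process every ~500ms across its 60s budget; as a shell sketch of the same wait:

    # pgrep flags: -x exact match, -n newest process, -f match full command line
    for _ in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done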
	I1206 10:36:53.310630  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:53.310706  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:53.342266  528268 cri.go:89] found id: ""
	I1206 10:36:53.342280  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.342287  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:53.342292  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:53.342356  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:53.368755  528268 cri.go:89] found id: ""
	I1206 10:36:53.368774  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.368781  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:53.368785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:53.368846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:53.393431  528268 cri.go:89] found id: ""
	I1206 10:36:53.393447  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.393454  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:53.393459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:53.393515  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:53.418954  528268 cri.go:89] found id: ""
	I1206 10:36:53.418967  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.418974  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:53.418979  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:53.419036  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:53.444726  528268 cri.go:89] found id: ""
	I1206 10:36:53.444740  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.444747  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:53.444752  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:53.444809  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:53.469041  528268 cri.go:89] found id: ""
	I1206 10:36:53.469054  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.469062  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:53.469067  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:53.469122  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:53.494455  528268 cri.go:89] found id: ""
	I1206 10:36:53.494468  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.494475  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:53.494483  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:53.494496  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:53.557127  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:53.557137  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:53.557148  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:53.629870  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:53.629900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:53.661451  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:53.661466  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:53.730909  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:53.730927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
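	The block above is one full pass of minikube's wait-for-apiserver loop: a pgrep for the kube-apiserver process, a crictl query per expected control-plane container, and, when every query comes back empty, a log sweep (kubelet, dmesg, describe nodes, CRI-O, container status). The container checks can be reproduced by hand from a shell on the node; a minimal sketch using the exact commands from the log (the for-loop wrapper is added here only for brevity):
	
	# Same process check minikube retries every half-second above
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	# Query the CRI runtime for each expected control-plane container;
	# in this run every query returned an empty id list (found id: "")
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done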
	I1206 10:36:56.247245  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:56.257306  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:56.257364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:56.286141  528268 cri.go:89] found id: ""
	I1206 10:36:56.286155  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.286163  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:56.286168  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:56.286228  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:56.313467  528268 cri.go:89] found id: ""
	I1206 10:36:56.313481  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.313488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:56.313499  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:56.313559  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:56.340777  528268 cri.go:89] found id: ""
	I1206 10:36:56.340791  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.340798  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:56.340803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:56.340862  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:56.367085  528268 cri.go:89] found id: ""
	I1206 10:36:56.367099  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.367106  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:56.367111  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:56.367188  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:56.392392  528268 cri.go:89] found id: ""
	I1206 10:36:56.392407  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.392414  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:56.392420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:56.392482  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:56.417786  528268 cri.go:89] found id: ""
	I1206 10:36:56.417799  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.417807  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:56.417812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:56.417871  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:56.443872  528268 cri.go:89] found id: ""
	I1206 10:36:56.443886  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.443893  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:56.443901  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:56.443911  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:56.509704  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:56.509723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:56.524726  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:56.524742  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:56.590779  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:56.590789  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:56.590799  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:56.657863  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:56.657883  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.188879  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:59.199665  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:59.199726  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:59.232126  528268 cri.go:89] found id: ""
	I1206 10:36:59.232140  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.232148  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:59.232153  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:59.232212  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:59.257550  528268 cri.go:89] found id: ""
	I1206 10:36:59.257564  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.257571  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:59.257576  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:59.257633  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:59.282608  528268 cri.go:89] found id: ""
	I1206 10:36:59.282623  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.282630  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:59.282636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:59.282698  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:59.312791  528268 cri.go:89] found id: ""
	I1206 10:36:59.312806  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.312813  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:59.312819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:59.312881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:59.339361  528268 cri.go:89] found id: ""
	I1206 10:36:59.339376  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.339383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:59.339388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:59.339447  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:59.366255  528268 cri.go:89] found id: ""
	I1206 10:36:59.366269  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.366276  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:59.366281  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:59.366339  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:59.394131  528268 cri.go:89] found id: ""
	I1206 10:36:59.394145  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.394152  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:59.394172  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:59.394182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:59.462514  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:59.462536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.491731  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:59.491747  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:59.562406  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:59.562426  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:59.577286  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:59.577302  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:59.642145  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
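	Each describe-nodes attempt fails identically: kubectl cannot reach the apiserver on localhost:8441 (connection refused), so no node state is recoverable. Two quick probes separate "nothing is listening on 8441" from "something is listening but rejecting the request". These commands are illustrative additions, not taken from the log, and assume ss and curl are available on the node:
	
	# Is anything listening on the apiserver port? (assumes ss is installed)
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	
	# Probe the same endpoint kubectl uses; given the failures above this
	# should also report connection refused (assumes curl is installed)
	curl -ksS https://localhost:8441/healthz || true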
	I1206 10:37:02.143135  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:02.153343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:02.153402  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:02.182430  528268 cri.go:89] found id: ""
	I1206 10:37:02.182453  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.182460  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:02.182466  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:02.182529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:02.217140  528268 cri.go:89] found id: ""
	I1206 10:37:02.217164  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.217171  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:02.217176  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:02.217241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:02.264761  528268 cri.go:89] found id: ""
	I1206 10:37:02.264775  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.264795  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:02.264800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:02.264857  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:02.295104  528268 cri.go:89] found id: ""
	I1206 10:37:02.295118  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.295161  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:02.295166  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:02.295232  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:02.324690  528268 cri.go:89] found id: ""
	I1206 10:37:02.324704  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.324711  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:02.324716  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:02.324776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:02.354165  528268 cri.go:89] found id: ""
	I1206 10:37:02.354179  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.354187  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:02.354192  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:02.354250  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:02.379657  528268 cri.go:89] found id: ""
	I1206 10:37:02.379671  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.379679  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:02.379686  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:02.379697  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:02.449725  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:02.449746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:02.464766  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:02.464783  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:02.527444  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:02.527457  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:02.527467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:02.595482  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:02.595503  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.126581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:05.136725  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:05.136783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:05.162008  528268 cri.go:89] found id: ""
	I1206 10:37:05.162022  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.162049  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:05.162055  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:05.162123  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:05.190290  528268 cri.go:89] found id: ""
	I1206 10:37:05.190305  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.190313  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:05.190318  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:05.190399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:05.222971  528268 cri.go:89] found id: ""
	I1206 10:37:05.223000  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.223008  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:05.223013  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:05.223083  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:05.249192  528268 cri.go:89] found id: ""
	I1206 10:37:05.249206  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.249213  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:05.249218  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:05.249285  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:05.280084  528268 cri.go:89] found id: ""
	I1206 10:37:05.280097  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.280104  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:05.280110  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:05.280176  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:05.306008  528268 cri.go:89] found id: ""
	I1206 10:37:05.306036  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.306044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:05.306049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:05.306115  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:05.331829  528268 cri.go:89] found id: ""
	I1206 10:37:05.331843  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.331850  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:05.331858  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:05.331868  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:05.394775  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:05.394787  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:05.394798  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:05.463063  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:05.463082  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.496791  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:05.496808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:05.562749  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:05.562768  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.077865  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.088556  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:08.088628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:08.114942  528268 cri.go:89] found id: ""
	I1206 10:37:08.114956  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.114963  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:08.114969  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:08.115027  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:08.141141  528268 cri.go:89] found id: ""
	I1206 10:37:08.141155  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.141162  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:08.141167  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:08.141235  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:08.166303  528268 cri.go:89] found id: ""
	I1206 10:37:08.166318  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.166325  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:08.166334  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:08.166394  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:08.199234  528268 cri.go:89] found id: ""
	I1206 10:37:08.199248  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.199255  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:08.199260  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:08.199326  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:08.231753  528268 cri.go:89] found id: ""
	I1206 10:37:08.231767  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.231774  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:08.231780  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:08.231842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:08.260152  528268 cri.go:89] found id: ""
	I1206 10:37:08.260166  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.260173  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:08.260179  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:08.260241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:08.285346  528268 cri.go:89] found id: ""
	I1206 10:37:08.285360  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.285367  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:08.285378  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:08.285388  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:08.353719  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:08.353740  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:08.385085  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:08.385101  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:08.459734  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:08.459762  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.474846  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:08.474862  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:08.546432  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
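	With the container queries still coming back empty, each sweep falls back to raw host logs: the last 400 journal lines for kubelet and crio, filtered kernel messages, and a container-status listing. These are the verbatim commands from the log and can be run directly on the node (the backtick substitution is rewritten as $(...) here):
	
	# Unit journals and kernel messages gathered by the sweep
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	# Container status, falling back to docker if crictl is absent
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a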
	I1206 10:37:11.048129  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.058654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:11.058714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:11.086873  528268 cri.go:89] found id: ""
	I1206 10:37:11.086889  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.086896  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:11.086903  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:11.086965  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:11.113880  528268 cri.go:89] found id: ""
	I1206 10:37:11.113904  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.113912  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:11.113918  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:11.113987  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:11.142338  528268 cri.go:89] found id: ""
	I1206 10:37:11.142361  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.142370  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:11.142375  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:11.142448  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:11.168341  528268 cri.go:89] found id: ""
	I1206 10:37:11.168355  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.168362  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:11.168368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:11.168425  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:11.218236  528268 cri.go:89] found id: ""
	I1206 10:37:11.218277  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.218285  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:11.218290  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:11.218357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:11.257366  528268 cri.go:89] found id: ""
	I1206 10:37:11.257379  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.257386  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:11.257391  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:11.257455  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:11.283202  528268 cri.go:89] found id: ""
	I1206 10:37:11.283224  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.283235  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:11.283251  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:11.283269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:11.349630  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:11.349650  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:11.365578  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:11.365606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:11.431959  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:11.431970  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:11.431981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:11.502903  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:11.502922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:14.032953  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.043177  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:14.043291  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:14.068855  528268 cri.go:89] found id: ""
	I1206 10:37:14.068870  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.068877  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:14.068882  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:14.068946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:14.094277  528268 cri.go:89] found id: ""
	I1206 10:37:14.094290  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.094308  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:14.094315  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:14.094372  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:14.119916  528268 cri.go:89] found id: ""
	I1206 10:37:14.119930  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.119948  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:14.119954  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:14.120029  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:14.144999  528268 cri.go:89] found id: ""
	I1206 10:37:14.145012  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.145020  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:14.145026  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:14.145088  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:14.170372  528268 cri.go:89] found id: ""
	I1206 10:37:14.170386  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.170404  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:14.170409  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:14.170475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:14.220015  528268 cri.go:89] found id: ""
	I1206 10:37:14.220029  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.220036  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:14.220041  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:14.220102  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:14.249187  528268 cri.go:89] found id: ""
	I1206 10:37:14.249201  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.249208  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:14.249216  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:14.249226  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:14.315809  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:14.315830  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:14.331228  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:14.331245  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:14.394665  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:14.394676  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:14.394686  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:14.466599  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:14.466623  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:16.996304  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.008394  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:17.008453  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:17.036500  528268 cri.go:89] found id: ""
	I1206 10:37:17.036513  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.036521  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:17.036526  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:17.036591  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:17.064759  528268 cri.go:89] found id: ""
	I1206 10:37:17.064773  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.064780  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:17.064785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:17.064846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:17.095263  528268 cri.go:89] found id: ""
	I1206 10:37:17.095276  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.095284  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:17.095300  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:17.095364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:17.121651  528268 cri.go:89] found id: ""
	I1206 10:37:17.121665  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.121673  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:17.121678  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:17.121747  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:17.148683  528268 cri.go:89] found id: ""
	I1206 10:37:17.148697  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.148704  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:17.148711  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:17.148773  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:17.180504  528268 cri.go:89] found id: ""
	I1206 10:37:17.180518  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.180535  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:17.180542  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:17.180611  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:17.208816  528268 cri.go:89] found id: ""
	I1206 10:37:17.208830  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.208837  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:17.208844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:17.208854  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:17.277798  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:17.277818  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:17.292728  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:17.292743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:17.366791  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:17.366801  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:17.366812  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:17.434192  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:17.434212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:19.971273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.981226  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:19.981286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:20.019762  528268 cri.go:89] found id: ""
	I1206 10:37:20.019777  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.019785  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:20.019791  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:20.019866  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:20.047256  528268 cri.go:89] found id: ""
	I1206 10:37:20.047270  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.047278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:20.047283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:20.047345  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:20.075694  528268 cri.go:89] found id: ""
	I1206 10:37:20.075708  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.075716  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:20.075721  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:20.075785  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:20.105896  528268 cri.go:89] found id: ""
	I1206 10:37:20.105910  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.105917  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:20.105922  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:20.105981  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:20.131910  528268 cri.go:89] found id: ""
	I1206 10:37:20.131923  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.131930  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:20.131935  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:20.131997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:20.157115  528268 cri.go:89] found id: ""
	I1206 10:37:20.157129  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.157135  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:20.157140  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:20.157202  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:20.188374  528268 cri.go:89] found id: ""
	I1206 10:37:20.188394  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.188401  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:20.188423  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:20.188434  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:20.267587  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:20.267607  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:20.283222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:20.283238  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:20.348772  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:20.348783  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:20.348796  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:20.415451  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:20.415474  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:22.948223  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.959160  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:22.959221  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:22.985131  528268 cri.go:89] found id: ""
	I1206 10:37:22.985144  528268 logs.go:282] 0 containers: []
	W1206 10:37:22.985151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:22.985156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:22.985242  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:23.012336  528268 cri.go:89] found id: ""
	I1206 10:37:23.012350  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.012358  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:23.012363  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:23.012433  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:23.037784  528268 cri.go:89] found id: ""
	I1206 10:37:23.037808  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.037816  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:23.037822  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:23.037899  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:23.066240  528268 cri.go:89] found id: ""
	I1206 10:37:23.066254  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.066262  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:23.066267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:23.066335  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:23.090898  528268 cri.go:89] found id: ""
	I1206 10:37:23.090912  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.090921  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:23.090926  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:23.090993  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:23.116011  528268 cri.go:89] found id: ""
	I1206 10:37:23.116039  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.116047  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:23.116052  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:23.116127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:23.140768  528268 cri.go:89] found id: ""
	I1206 10:37:23.140781  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.140788  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:23.140796  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:23.140806  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:23.210300  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:23.210319  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:23.229296  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:23.229311  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:23.297415  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:23.297428  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:23.297438  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:23.364180  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:23.364200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:25.892120  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.902322  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:25.902381  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:25.931154  528268 cri.go:89] found id: ""
	I1206 10:37:25.931168  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.931175  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:25.931180  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:25.931245  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:25.957709  528268 cri.go:89] found id: ""
	I1206 10:37:25.957724  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.957731  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:25.957736  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:25.957793  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:25.985765  528268 cri.go:89] found id: ""
	I1206 10:37:25.985779  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.985786  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:25.985791  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:25.985849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:26.016739  528268 cri.go:89] found id: ""
	I1206 10:37:26.016859  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.016867  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:26.016873  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:26.016945  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:26.043228  528268 cri.go:89] found id: ""
	I1206 10:37:26.043242  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.043252  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:26.043258  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:26.043331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:26.069862  528268 cri.go:89] found id: ""
	I1206 10:37:26.069888  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.069896  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:26.069902  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:26.069979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:26.097635  528268 cri.go:89] found id: ""
	I1206 10:37:26.097651  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.097659  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:26.097666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:26.097677  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:26.163107  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:26.163132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:26.177703  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:26.177723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:26.254904  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:26.254915  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:26.254927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:26.322703  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:26.322723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:28.850178  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.860819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:28.860878  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:28.887162  528268 cri.go:89] found id: ""
	I1206 10:37:28.887175  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.887183  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:28.887188  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:28.887246  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:28.912223  528268 cri.go:89] found id: ""
	I1206 10:37:28.912237  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.912251  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:28.912256  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:28.912318  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:28.937893  528268 cri.go:89] found id: ""
	I1206 10:37:28.937907  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.937914  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:28.937920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:28.937979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:28.966798  528268 cri.go:89] found id: ""
	I1206 10:37:28.966812  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.966819  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:28.966825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:28.966887  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:28.994392  528268 cri.go:89] found id: ""
	I1206 10:37:28.994406  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.994413  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:28.994418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:28.994480  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:29.020703  528268 cri.go:89] found id: ""
	I1206 10:37:29.020718  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.020725  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:29.020730  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:29.020788  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:29.049956  528268 cri.go:89] found id: ""
	I1206 10:37:29.049969  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.049977  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:29.049986  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:29.049998  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:29.116113  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:29.116133  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:29.130937  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:29.130954  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:29.199649  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:29.199659  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:29.199670  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:29.271990  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:29.272011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:31.801925  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.812057  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:31.812130  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:31.837642  528268 cri.go:89] found id: ""
	I1206 10:37:31.837656  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.837663  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:31.837668  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:31.837724  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:31.863706  528268 cri.go:89] found id: ""
	I1206 10:37:31.863721  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.863728  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:31.863733  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:31.863795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:31.892284  528268 cri.go:89] found id: ""
	I1206 10:37:31.892298  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.892305  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:31.892310  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:31.892370  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:31.920973  528268 cri.go:89] found id: ""
	I1206 10:37:31.920987  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.920994  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:31.920999  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:31.921072  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:31.946196  528268 cri.go:89] found id: ""
	I1206 10:37:31.946209  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.946216  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:31.946221  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:31.946280  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:31.972154  528268 cri.go:89] found id: ""
	I1206 10:37:31.972168  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.972176  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:31.972182  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:31.972273  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:31.998166  528268 cri.go:89] found id: ""
	I1206 10:37:31.998179  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.998194  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:31.998202  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:31.998212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:32.066002  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:32.066020  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:32.081440  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:32.081456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:32.155010  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:32.155021  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:32.155032  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:32.239005  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:32.239035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:34.779578  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.789994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:34.790061  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:34.817069  528268 cri.go:89] found id: ""
	I1206 10:37:34.817083  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.817091  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:34.817096  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:34.817154  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:34.843456  528268 cri.go:89] found id: ""
	I1206 10:37:34.843470  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.843478  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:34.843483  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:34.843540  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:34.873150  528268 cri.go:89] found id: ""
	I1206 10:37:34.873164  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.873171  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:34.873176  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:34.873236  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:34.901463  528268 cri.go:89] found id: ""
	I1206 10:37:34.901476  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.901483  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:34.901489  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:34.901546  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:34.930362  528268 cri.go:89] found id: ""
	I1206 10:37:34.930376  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.930383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:34.930389  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:34.930460  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:34.955907  528268 cri.go:89] found id: ""
	I1206 10:37:34.955920  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.955928  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:34.955936  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:34.955997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:34.981646  528268 cri.go:89] found id: ""
	I1206 10:37:34.981660  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.981667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:34.981676  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:34.981690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:35.051925  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:35.051946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:35.067379  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:35.067395  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:35.132911  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:35.132921  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:35.132932  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:35.203071  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:35.203091  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:37.738787  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.749325  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:37.749395  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:37.777933  528268 cri.go:89] found id: ""
	I1206 10:37:37.777947  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.777955  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:37.777961  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:37.778018  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:37.803626  528268 cri.go:89] found id: ""
	I1206 10:37:37.803640  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.803647  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:37.803652  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:37.803711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:37.829518  528268 cri.go:89] found id: ""
	I1206 10:37:37.829532  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.829540  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:37.829545  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:37.829608  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:37.854832  528268 cri.go:89] found id: ""
	I1206 10:37:37.854846  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.854853  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:37.854858  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:37.854918  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:37.879627  528268 cri.go:89] found id: ""
	I1206 10:37:37.879641  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.879649  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:37.879654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:37.879712  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:37.906054  528268 cri.go:89] found id: ""
	I1206 10:37:37.906067  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.906074  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:37.906080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:37.906137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:37.931611  528268 cri.go:89] found id: ""
	I1206 10:37:37.931624  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.931632  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:37.931640  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:37.931651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:37.997740  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:37.997760  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:38.023284  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:38.023303  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:38.091986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:38.082741   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.083460   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.085430   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.086101   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.087877   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:38.092014  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:38.092027  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:38.163320  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:38.163343  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:40.709445  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.720016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:40.720077  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:40.745539  528268 cri.go:89] found id: ""
	I1206 10:37:40.745554  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.745561  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:40.745566  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:40.745630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:40.775524  528268 cri.go:89] found id: ""
	I1206 10:37:40.775538  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.775546  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:40.775552  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:40.775612  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:40.800974  528268 cri.go:89] found id: ""
	I1206 10:37:40.800988  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.800995  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:40.801001  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:40.801064  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:40.825855  528268 cri.go:89] found id: ""
	I1206 10:37:40.825869  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.825877  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:40.825882  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:40.825940  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:40.856039  528268 cri.go:89] found id: ""
	I1206 10:37:40.856052  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.856059  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:40.856064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:40.856129  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:40.886499  528268 cri.go:89] found id: ""
	I1206 10:37:40.886513  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.886520  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:40.886527  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:40.886586  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:40.913975  528268 cri.go:89] found id: ""
	I1206 10:37:40.913989  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.913996  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:40.914004  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:40.914014  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:40.979882  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:40.979904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:40.995137  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:40.995155  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:41.060228  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:41.051325   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.052002   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.053633   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.054141   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.055869   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:41.060245  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:41.060258  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:41.130025  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:41.130046  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
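
The container-status command is a two-level fallback: the command substitution picks the full crictl path when the binary is installed (and echoes the bare name otherwise, so the failure message stays readable), and the trailing "|| sudo docker ps -a" tries Docker if crictl itself errors out. Roughly the same invocation as a local Go sketch; minikube actually runs it remotely through ssh_runner.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback chain the log shows: prefer crictl, fall back to docker.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("both runtimes failed:", err)
        }
    }

Resolving the path before running keeps the error message meaningful on hosts where crictl is absent.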
	I1206 10:37:43.659238  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.669354  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:43.669430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:43.694872  528268 cri.go:89] found id: ""
	I1206 10:37:43.694886  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.694893  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:43.694899  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:43.694956  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:43.720265  528268 cri.go:89] found id: ""
	I1206 10:37:43.720278  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.720286  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:43.720290  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:43.720349  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:43.746213  528268 cri.go:89] found id: ""
	I1206 10:37:43.746226  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.746234  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:43.746239  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:43.746300  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:43.771902  528268 cri.go:89] found id: ""
	I1206 10:37:43.771916  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.771923  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:43.771928  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:43.771984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:43.797840  528268 cri.go:89] found id: ""
	I1206 10:37:43.797854  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.797874  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:43.797879  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:43.797949  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:43.823569  528268 cri.go:89] found id: ""
	I1206 10:37:43.823583  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.823590  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:43.823596  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:43.823654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:43.850154  528268 cri.go:89] found id: ""
	I1206 10:37:43.850169  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.850187  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:43.850196  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:43.850207  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:43.919668  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:43.919690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.954253  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:43.954269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:44.019533  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:44.019556  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:44.034911  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:44.034930  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:44.098130  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
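
The timestamps show this whole probe-and-gather cycle repeating roughly every three seconds (10:37:40, :43, :46, ...), which reads like a retry loop waiting for the apiserver to come back up. A hedged sketch of such an outer loop; apiserverUp is a hypothetical stand-in, not a minikube function:

    package main

    import (
        "fmt"
        "time"
    )

    // apiserverUp stands in for the pgrep/crictl probes shown in the log.
    func apiserverUp() bool { return false }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("apiserver is healthy")
                return
            }
            // A real runner would gather the journalctl/dmesg/crictl logs here.
            time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
        }
        fmt.Println("timed out waiting for apiserver")
    }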
	I1206 10:37:46.599796  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.610343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:46.610410  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:46.637289  528268 cri.go:89] found id: ""
	I1206 10:37:46.637304  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.637311  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:46.637317  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:46.637380  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:46.664098  528268 cri.go:89] found id: ""
	I1206 10:37:46.664112  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.664118  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:46.664123  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:46.664183  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:46.693606  528268 cri.go:89] found id: ""
	I1206 10:37:46.693619  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.693638  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:46.693644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:46.693718  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:46.719425  528268 cri.go:89] found id: ""
	I1206 10:37:46.719438  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.719445  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:46.719451  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:46.719511  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:46.748960  528268 cri.go:89] found id: ""
	I1206 10:37:46.748974  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.748982  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:46.748987  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:46.749047  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:46.782749  528268 cri.go:89] found id: ""
	I1206 10:37:46.782763  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.782770  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:46.782776  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:46.782846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:46.807615  528268 cri.go:89] found id: ""
	I1206 10:37:46.807629  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.807636  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:46.807644  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:46.807654  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:46.838618  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:46.838634  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:46.905518  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:46.905537  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:46.920399  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:46.920417  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:46.985957  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:46.978179   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.978741   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980269   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980715   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.982218   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:46.978179   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.978741   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980269   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980715   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.982218   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:46.985968  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:46.985981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.555258  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.565209  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:49.565266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:49.593833  528268 cri.go:89] found id: ""
	I1206 10:37:49.593846  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.593853  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:49.593858  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:49.593914  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:49.621098  528268 cri.go:89] found id: ""
	I1206 10:37:49.621111  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.621119  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:49.621124  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:49.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:49.645669  528268 cri.go:89] found id: ""
	I1206 10:37:49.645681  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.645689  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:49.645694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:49.645750  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:49.672058  528268 cri.go:89] found id: ""
	I1206 10:37:49.672072  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.672080  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:49.672085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:49.672140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:49.696988  528268 cri.go:89] found id: ""
	I1206 10:37:49.697002  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.697009  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:49.697015  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:49.697076  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:49.723261  528268 cri.go:89] found id: ""
	I1206 10:37:49.723275  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.723282  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:49.723287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:49.723357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:49.750307  528268 cri.go:89] found id: ""
	I1206 10:37:49.750321  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.750328  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:49.750336  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:49.750346  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:49.765699  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:49.765721  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:49.827929  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:49.819281   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.820177   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.821896   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.822193   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.823677   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:49.819281   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.820177   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.821896   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.822193   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.823677   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:49.827938  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:49.827962  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.899802  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:49.899820  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:49.928018  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:49.928035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
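
Each listing runs crictl ps -a --quiet --name=<component>: -a includes exited containers, --name filters by name, and --quiet prints only container IDs, one per line. With the control plane down the output is empty, which the log records as found id: "" followed by "0 containers". A small sketch of that empty-output parse (not the actual cri.go logic):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        // --quiet prints one container ID per line; empty output means none.
        ids := strings.Fields(strings.TrimSpace(string(out)))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }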
	I1206 10:37:52.495744  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:52.505888  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:52.505958  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:52.532610  528268 cri.go:89] found id: ""
	I1206 10:37:52.532623  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.532631  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:52.532636  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:52.532695  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:52.558679  528268 cri.go:89] found id: ""
	I1206 10:37:52.558692  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.558700  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:52.558705  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:52.558762  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:52.585203  528268 cri.go:89] found id: ""
	I1206 10:37:52.585217  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.585225  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:52.585230  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:52.585286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:52.611483  528268 cri.go:89] found id: ""
	I1206 10:37:52.611496  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.611503  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:52.611510  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:52.611568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:52.638054  528268 cri.go:89] found id: ""
	I1206 10:37:52.638067  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.638075  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:52.638080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:52.638137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:52.666746  528268 cri.go:89] found id: ""
	I1206 10:37:52.666760  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.666767  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:52.666773  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:52.666833  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:52.691974  528268 cri.go:89] found id: ""
	I1206 10:37:52.691997  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.692005  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:52.692015  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:52.692025  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:52.761093  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:52.761113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:52.790376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:52.790392  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.858897  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:52.858915  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:52.873906  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:52.873923  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:52.937907  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:52.929773   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.930648   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932194   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932561   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.934055   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:52.929773   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.930648   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932194   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932561   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.934055   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:55.439279  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.450466  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:55.450529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:55.483494  528268 cri.go:89] found id: ""
	I1206 10:37:55.483508  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.483515  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:55.483520  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:55.483576  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:55.515860  528268 cri.go:89] found id: ""
	I1206 10:37:55.515874  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.515881  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:55.515886  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:55.515942  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:55.542224  528268 cri.go:89] found id: ""
	I1206 10:37:55.542239  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.542248  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:55.542253  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:55.542311  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:55.567547  528268 cri.go:89] found id: ""
	I1206 10:37:55.567561  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.567568  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:55.567574  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:55.567630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:55.594478  528268 cri.go:89] found id: ""
	I1206 10:37:55.594491  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.594499  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:55.594505  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:55.594568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:55.620118  528268 cri.go:89] found id: ""
	I1206 10:37:55.620132  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.620146  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:55.620151  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:55.620210  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:55.644692  528268 cri.go:89] found id: ""
	I1206 10:37:55.644706  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.644713  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:55.644721  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:55.644732  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:55.712056  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:55.702146   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.702755   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704324   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704667   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.708009   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:55.702146   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.702755   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704324   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704667   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.708009   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:55.712075  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:55.712085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:55.782393  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:55.782414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:55.817896  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:55.817913  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:55.892357  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:55.892385  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
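
The probe that opens every cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, matches the pattern against each process's full command line (-f), requires the whole line to match (-x), and reports only the newest match (-n). pgrep exits with status 1 when nothing matches, which is why the runner falls straight through to the container listings each time. A sketch of reading that status from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            fmt.Println("no kube-apiserver process found") // pgrep's "no match" status
            return
        }
        if err != nil {
            fmt.Println("pgrep failed:", err)
            return
        }
        fmt.Println("a kube-apiserver process is running")
    }

Treating exit status 1 as "no match" distinguishes an absent apiserver from pgrep itself failing.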
	I1206 10:37:58.407847  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.417968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:58.418026  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:58.446859  528268 cri.go:89] found id: ""
	I1206 10:37:58.446872  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.446879  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:58.446884  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:58.446946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:58.475161  528268 cri.go:89] found id: ""
	I1206 10:37:58.475175  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.475182  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:58.475187  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:58.475244  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:58.503498  528268 cri.go:89] found id: ""
	I1206 10:37:58.503513  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.503520  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:58.503525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:58.503583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:58.529955  528268 cri.go:89] found id: ""
	I1206 10:37:58.529970  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.529977  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:58.529983  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:58.530038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:58.557174  528268 cri.go:89] found id: ""
	I1206 10:37:58.557188  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.557196  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:58.557201  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:58.557259  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:58.586116  528268 cri.go:89] found id: ""
	I1206 10:37:58.586130  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.586149  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:58.586156  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:58.586211  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:58.620339  528268 cri.go:89] found id: ""
	I1206 10:37:58.620353  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.620361  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:58.620368  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:58.620379  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:58.686086  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:58.686105  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.700471  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:58.700487  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:58.772759  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:58.764751   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.765482   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767041   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767492   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.769066   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:58.764751   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.765482   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767041   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767492   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.769066   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:58.772768  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:58.772779  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:58.841699  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:58.841718  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.372136  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.382712  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:01.382776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:01.410577  528268 cri.go:89] found id: ""
	I1206 10:38:01.410591  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.410598  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:01.410603  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:01.410666  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:01.444228  528268 cri.go:89] found id: ""
	I1206 10:38:01.444251  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.444258  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:01.444264  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:01.444331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:01.486632  528268 cri.go:89] found id: ""
	I1206 10:38:01.486645  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.486652  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:01.486657  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:01.486717  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:01.518190  528268 cri.go:89] found id: ""
	I1206 10:38:01.518203  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.518210  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:01.518215  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:01.518276  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:01.543942  528268 cri.go:89] found id: ""
	I1206 10:38:01.543956  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.543963  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:01.543968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:01.544032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:01.569769  528268 cri.go:89] found id: ""
	I1206 10:38:01.569803  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.569832  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:01.569845  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:01.569902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:01.594441  528268 cri.go:89] found id: ""
	I1206 10:38:01.594456  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.594463  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:01.594471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:01.594482  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:01.609124  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:01.609139  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:01.671291  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:01.663080   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.663834   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665465   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665773   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.667299   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:01.663080   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.663834   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665465   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665773   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.667299   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:01.671302  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:01.671312  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:01.739749  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:01.739769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.768671  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:01.768687  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
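
One detail worth noting: the "Gathering logs for ..." steps come out in a different order each cycle, but always as a rotation of the same sequence (CRI-O, container status, kubelet, dmesg, describe nodes; CRI-O first at 10:37:43, dmesg first at 10:37:49, describe nodes first at 10:37:55). That pattern is consistent with iterating a Go map, which starts at a random offset but walks a small, stable map in a fixed internal order. A toy demonstration, assuming (hypothetically) that the log sources are keyed in a map:

    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a || docker ps -a",
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes ...",
        }
        // Go randomizes where map iteration starts, so each loop can begin
        // at a different key while keeping the same relative walk order,
        // producing exactly this kind of rotated gather sequence.
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ... (%s)\n", name, cmd)
        }
    }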
	I1206 10:38:04.339038  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.349363  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:04.349432  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:04.375032  528268 cri.go:89] found id: ""
	I1206 10:38:04.375045  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.375052  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:04.375058  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:04.375139  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:04.399997  528268 cri.go:89] found id: ""
	I1206 10:38:04.400011  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.400018  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:04.400023  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:04.400081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:04.424851  528268 cri.go:89] found id: ""
	I1206 10:38:04.424876  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.424884  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:04.424889  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:04.424959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:04.453149  528268 cri.go:89] found id: ""
	I1206 10:38:04.453162  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.453170  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:04.453175  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:04.453263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:04.483514  528268 cri.go:89] found id: ""
	I1206 10:38:04.483527  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.483534  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:04.483540  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:04.483598  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:04.511967  528268 cri.go:89] found id: ""
	I1206 10:38:04.511980  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.511987  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:04.511993  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:04.512048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:04.541164  528268 cri.go:89] found id: ""
	I1206 10:38:04.541175  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.541182  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:04.541190  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:04.541199  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:04.575975  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:04.575991  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.642763  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:04.642781  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:04.657313  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:04.657336  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:04.721928  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:04.721939  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:04.721952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:07.293453  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:07.303645  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:07.303708  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:07.329285  528268 cri.go:89] found id: ""
	I1206 10:38:07.329299  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.329306  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:07.329313  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:07.329371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:07.354889  528268 cri.go:89] found id: ""
	I1206 10:38:07.354903  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.354911  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:07.354916  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:07.354975  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:07.380496  528268 cri.go:89] found id: ""
	I1206 10:38:07.380510  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.380518  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:07.380523  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:07.380583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:07.408252  528268 cri.go:89] found id: ""
	I1206 10:38:07.408265  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.408272  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:07.408278  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:07.408341  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:07.434563  528268 cri.go:89] found id: ""
	I1206 10:38:07.434577  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.434584  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:07.434590  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:07.434656  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:07.465668  528268 cri.go:89] found id: ""
	I1206 10:38:07.465681  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.465688  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:07.465694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:07.465755  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:07.496206  528268 cri.go:89] found id: ""
	I1206 10:38:07.496220  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.496227  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:07.496252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:07.496291  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:07.561228  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:07.561250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:07.576434  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:07.576450  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:07.645534  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:07.637588   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.638151   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.639755   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.640208   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.641673   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:07.637588   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.638151   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.639755   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.640208   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.641673   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:07.645544  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:07.645555  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:07.713688  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:07.713708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
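	
	The block above is one complete iteration of the retry loop that produced this log: minikube probes for a kube-apiserver process, lists each expected control-plane container via crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before trying again a few seconds later. A minimal shell sketch of that cycle, assuming a plain while-loop wrapper (the individual commands are quoted verbatim from the log lines above; the wrapper and the sleep interval are illustrative assumptions, not minikube's actual Go implementation):
	
	    # Illustrative reconstruction of the probe cycle seen in this log.
	    # Commands are copied from the log; the loop itself is an assumption.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	               kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$c"   # every listing returns empty here
	      done
	      sudo journalctl -u kubelet -n 400
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig   # fails: localhost:8441 refused
	      sudo journalctl -u crio -n 400
	      sudo crictl ps -a || sudo docker ps -a
	      sleep 3   # the timestamps show roughly a 3-second cadence
	    done
	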
	I1206 10:38:10.250054  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:10.260518  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:10.260577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:10.287264  528268 cri.go:89] found id: ""
	I1206 10:38:10.287283  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.287291  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:10.287296  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:10.287358  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:10.312333  528268 cri.go:89] found id: ""
	I1206 10:38:10.312347  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.312355  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:10.312360  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:10.312420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:10.336978  528268 cri.go:89] found id: ""
	I1206 10:38:10.336993  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.337000  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:10.337004  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:10.337069  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:10.363441  528268 cri.go:89] found id: ""
	I1206 10:38:10.363455  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.363463  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:10.363468  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:10.363526  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:10.388225  528268 cri.go:89] found id: ""
	I1206 10:38:10.388245  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.388253  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:10.388259  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:10.388320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:10.414362  528268 cri.go:89] found id: ""
	I1206 10:38:10.414375  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.414382  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:10.414388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:10.414445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:10.454478  528268 cri.go:89] found id: ""
	I1206 10:38:10.454491  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.454499  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:10.454508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:10.454518  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:10.524830  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:10.524851  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:10.540277  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:10.540292  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:10.607931  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:10.607942  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:10.607955  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:10.675104  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:10.675134  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:13.206837  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:13.217943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:13.218002  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:13.243670  528268 cri.go:89] found id: ""
	I1206 10:38:13.243684  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.243691  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:13.243697  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:13.243758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:13.268428  528268 cri.go:89] found id: ""
	I1206 10:38:13.268443  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.268450  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:13.268455  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:13.268512  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:13.294024  528268 cri.go:89] found id: ""
	I1206 10:38:13.294038  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.294045  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:13.294050  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:13.294106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:13.321522  528268 cri.go:89] found id: ""
	I1206 10:38:13.321536  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.321543  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:13.321548  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:13.321610  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:13.351214  528268 cri.go:89] found id: ""
	I1206 10:38:13.351228  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.351235  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:13.351240  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:13.351299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:13.376433  528268 cri.go:89] found id: ""
	I1206 10:38:13.376447  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.376454  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:13.376459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:13.376520  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:13.405980  528268 cri.go:89] found id: ""
	I1206 10:38:13.405994  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.406001  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:13.406009  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:13.406019  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:13.481314  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:13.481334  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:13.503361  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:13.503378  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:13.570756  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:13.570765  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:13.570778  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:13.641258  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:13.641282  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:16.171913  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.182483  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.182545  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.210129  528268 cri.go:89] found id: ""
	I1206 10:38:16.210143  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.210151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.210156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.210217  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.237040  528268 cri.go:89] found id: ""
	I1206 10:38:16.237060  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.237067  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.237073  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.237134  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:16.263801  528268 cri.go:89] found id: ""
	I1206 10:38:16.263815  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.263822  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:16.263827  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:16.263886  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:16.289263  528268 cri.go:89] found id: ""
	I1206 10:38:16.289277  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.289284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:16.289289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:16.289347  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:16.317849  528268 cri.go:89] found id: ""
	I1206 10:38:16.317862  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.317870  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:16.317875  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:16.317933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:16.347303  528268 cri.go:89] found id: ""
	I1206 10:38:16.347317  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.347324  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:16.347329  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:16.347387  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:16.373512  528268 cri.go:89] found id: ""
	I1206 10:38:16.373525  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.373542  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:16.373552  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:16.373568  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:16.438751  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:16.438769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:16.455447  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:16.455463  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:16.527176  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:16.527186  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:16.527196  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:16.595033  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:16.595053  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:19.127162  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.137626  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.137685  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.168715  528268 cri.go:89] found id: ""
	I1206 10:38:19.168729  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.168736  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.168741  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.168798  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.199324  528268 cri.go:89] found id: ""
	I1206 10:38:19.199341  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.199354  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.199359  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.199418  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.225589  528268 cri.go:89] found id: ""
	I1206 10:38:19.225601  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.225608  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.225613  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.225670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.251399  528268 cri.go:89] found id: ""
	I1206 10:38:19.251412  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.251420  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.251425  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.251488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:19.276108  528268 cri.go:89] found id: ""
	I1206 10:38:19.276122  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.276129  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:19.276134  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:19.276193  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:19.301269  528268 cri.go:89] found id: ""
	I1206 10:38:19.301282  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.301290  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:19.301295  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:19.301352  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:19.327537  528268 cri.go:89] found id: ""
	I1206 10:38:19.327552  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.327559  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:19.327568  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:19.327578  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.398088  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:19.398114  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:19.413590  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:19.413609  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:19.517843  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:19.517853  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:19.517866  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:19.587464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:19.587485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.115984  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.126048  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.126111  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.152880  528268 cri.go:89] found id: ""
	I1206 10:38:22.152893  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.152900  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.152905  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.152961  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.179175  528268 cri.go:89] found id: ""
	I1206 10:38:22.179190  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.179197  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.179202  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.179263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.204543  528268 cri.go:89] found id: ""
	I1206 10:38:22.204557  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.204565  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.204570  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.204631  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.229269  528268 cri.go:89] found id: ""
	I1206 10:38:22.229283  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.229291  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.229296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.229353  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.255404  528268 cri.go:89] found id: ""
	I1206 10:38:22.255418  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.255425  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.255430  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.255488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.280965  528268 cri.go:89] found id: ""
	I1206 10:38:22.280981  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.280988  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.280994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.281052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:22.309901  528268 cri.go:89] found id: ""
	I1206 10:38:22.309915  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.309922  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:22.309930  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:22.309940  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:22.382110  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:22.382130  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.412045  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:22.412060  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:22.485902  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:22.485921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.501637  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:22.501655  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:22.572937  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
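	
	Every kubectl attempt in this window fails the same way (dial tcp [::1]:8441: connect: connection refused) while every crictl listing comes back empty, which points to the apiserver container never having been created rather than a running-but-unhealthy apiserver. A short sketch of how to tell those two states apart from inside the node (the crictl command appears verbatim in the log; the curl probe is an illustrative assumption, not part of the report):
	
	    # Empty output here means the container does not exist at all:
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # "connection refused" means nothing is listening on the port; a running
	    # but unhealthy apiserver would instead answer (or hang) on /healthz:
	    curl -k https://localhost:8441/healthz
	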
	I1206 10:38:25.074598  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.085017  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.085084  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.110479  528268 cri.go:89] found id: ""
	I1206 10:38:25.110493  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.110500  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.110506  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.110566  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.137467  528268 cri.go:89] found id: ""
	I1206 10:38:25.137481  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.137488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.137493  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.137552  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.163017  528268 cri.go:89] found id: ""
	I1206 10:38:25.163033  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.163040  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.163046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.163105  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.193876  528268 cri.go:89] found id: ""
	I1206 10:38:25.193890  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.193898  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.193903  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.193966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.220362  528268 cri.go:89] found id: ""
	I1206 10:38:25.220376  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.220383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.220388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.220444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.246057  528268 cri.go:89] found id: ""
	I1206 10:38:25.246070  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.246078  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.246083  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.246140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.273646  528268 cri.go:89] found id: ""
	I1206 10:38:25.273660  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.273667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.273675  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.273691  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:25.341507  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:25.341527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:25.356890  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:25.356906  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:25.432607  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:25.432617  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:25.432628  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:25.515030  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.515052  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.053670  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.064577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.064641  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.091082  528268 cri.go:89] found id: ""
	I1206 10:38:28.091097  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.091106  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.091111  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.091205  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.116793  528268 cri.go:89] found id: ""
	I1206 10:38:28.116808  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.116815  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.116822  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.116881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.145938  528268 cri.go:89] found id: ""
	I1206 10:38:28.145952  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.145960  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.145965  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.146025  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.171742  528268 cri.go:89] found id: ""
	I1206 10:38:28.171755  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.171763  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.171768  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.171826  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.197528  528268 cri.go:89] found id: ""
	I1206 10:38:28.197542  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.197549  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.197554  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.197613  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.224277  528268 cri.go:89] found id: ""
	I1206 10:38:28.224291  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.224298  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.224303  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.224368  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.252201  528268 cri.go:89] found id: ""
	I1206 10:38:28.252215  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.252223  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.252237  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:28.252248  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.284626  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.284642  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.351035  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.351055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.366043  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.366061  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:28.437473  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:28.437483  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:28.437506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.019982  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.030426  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.030488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.055406  528268 cri.go:89] found id: ""
	I1206 10:38:31.055419  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.055427  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.055432  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.055490  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.081639  528268 cri.go:89] found id: ""
	I1206 10:38:31.081653  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.081660  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.081665  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.081729  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.111871  528268 cri.go:89] found id: ""
	I1206 10:38:31.111886  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.111894  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.111899  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.111959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.142949  528268 cri.go:89] found id: ""
	I1206 10:38:31.142964  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.142971  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.142977  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.143042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.169930  528268 cri.go:89] found id: ""
	I1206 10:38:31.169946  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.169954  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.169959  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.170020  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.196019  528268 cri.go:89] found id: ""
	I1206 10:38:31.196033  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.196041  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.196046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.196104  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.226526  528268 cri.go:89] found id: ""
	I1206 10:38:31.226540  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.226547  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.226556  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.226567  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.289723  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.289733  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:31.289746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.358922  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.358941  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.387252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.387268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:31.460730  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:31.460749  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:33.977403  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:33.987866  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:33.987933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.023637  528268 cri.go:89] found id: ""
	I1206 10:38:34.023651  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.023659  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.023664  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.023728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.052242  528268 cri.go:89] found id: ""
	I1206 10:38:34.052256  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.052263  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.052269  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.052330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.077707  528268 cri.go:89] found id: ""
	I1206 10:38:34.077721  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.077728  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.077734  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.077795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.103066  528268 cri.go:89] found id: ""
	I1206 10:38:34.103079  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.103098  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.103103  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.103185  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.132994  528268 cri.go:89] found id: ""
	I1206 10:38:34.133007  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.133015  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.133020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.133081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.159017  528268 cri.go:89] found id: ""
	I1206 10:38:34.159030  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.159038  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.159043  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.159101  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.185998  528268 cri.go:89] found id: ""
	I1206 10:38:34.186012  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.186020  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.186028  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.186042  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.257644  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.257664  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.273073  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.273092  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.344235  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.344247  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:34.344260  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:34.414848  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.414867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:36.966180  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:36.976392  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:36.976457  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.002549  528268 cri.go:89] found id: ""
	I1206 10:38:37.002566  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.002574  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.002580  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.002657  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.033009  528268 cri.go:89] found id: ""
	I1206 10:38:37.033024  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.033031  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.033037  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.033106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.059257  528268 cri.go:89] found id: ""
	I1206 10:38:37.059271  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.059279  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.059285  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.059346  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.090436  528268 cri.go:89] found id: ""
	I1206 10:38:37.090449  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.090457  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.090462  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.090523  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.118194  528268 cri.go:89] found id: ""
	I1206 10:38:37.118208  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.118215  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.118222  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.118284  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.144022  528268 cri.go:89] found id: ""
	I1206 10:38:37.144036  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.144044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.144049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.144107  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.168416  528268 cri.go:89] found id: ""
	I1206 10:38:37.168430  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.168438  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.168445  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.168456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.234878  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.234898  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.250351  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.250374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.316139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.316149  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:37.316159  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:37.385780  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.385800  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:39.916327  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:39.926345  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:39.926412  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:39.953639  528268 cri.go:89] found id: ""
	I1206 10:38:39.953652  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.953660  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:39.953671  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:39.953732  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:39.979049  528268 cri.go:89] found id: ""
	I1206 10:38:39.979064  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.979072  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:39.979077  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:39.979164  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.013684  528268 cri.go:89] found id: ""
	I1206 10:38:40.013700  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.013708  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.013714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.013783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.052804  528268 cri.go:89] found id: ""
	I1206 10:38:40.052820  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.052828  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.052834  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.052902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.084356  528268 cri.go:89] found id: ""
	I1206 10:38:40.084372  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.084380  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.084386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.084451  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.112282  528268 cri.go:89] found id: ""
	I1206 10:38:40.112297  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.112304  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.112312  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.112373  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.140065  528268 cri.go:89] found id: ""
	I1206 10:38:40.140080  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.140087  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.140094  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.140108  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.208521  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.208530  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:40.208541  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:40.280105  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.280126  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.313393  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.313409  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.380769  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.380789  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:42.896735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:42.906913  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:42.906971  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:42.932466  528268 cri.go:89] found id: ""
	I1206 10:38:42.932480  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.932493  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:42.932499  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:42.932560  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:42.962618  528268 cri.go:89] found id: ""
	I1206 10:38:42.962633  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.962641  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:42.962647  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:42.962704  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:42.989497  528268 cri.go:89] found id: ""
	I1206 10:38:42.989511  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.989519  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:42.989525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:42.989581  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.016798  528268 cri.go:89] found id: ""
	I1206 10:38:43.016818  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.016825  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.016831  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.017042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.044571  528268 cri.go:89] found id: ""
	I1206 10:38:43.044589  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.044599  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.044606  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.044679  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.072240  528268 cri.go:89] found id: ""
	I1206 10:38:43.072256  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.072264  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.072269  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.072330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.098196  528268 cri.go:89] found id: ""
	I1206 10:38:43.098211  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.098218  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.098225  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.098237  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.113559  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.113577  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.177585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.177595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:43.177606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:43.251189  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.251210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.278658  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.278673  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:45.849509  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:45.861204  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:45.861266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:45.888209  528268 cri.go:89] found id: ""
	I1206 10:38:45.888228  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.888236  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:45.888241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:45.888306  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:45.913344  528268 cri.go:89] found id: ""
	I1206 10:38:45.913357  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.913365  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:45.913370  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:45.913429  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:45.939830  528268 cri.go:89] found id: ""
	I1206 10:38:45.939844  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.939852  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:45.939857  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:45.939927  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:45.964893  528268 cri.go:89] found id: ""
	I1206 10:38:45.964907  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.964914  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:45.964920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:45.964984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:45.991528  528268 cri.go:89] found id: ""
	I1206 10:38:45.991540  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.991548  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:45.991553  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:45.991614  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.018162  528268 cri.go:89] found id: ""
	I1206 10:38:46.018176  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.018184  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.018190  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.018249  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.045784  528268 cri.go:89] found id: ""
	I1206 10:38:46.045807  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.045814  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.045822  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.045833  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.114786  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.114796  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:46.114808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:46.185171  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.185193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.213442  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.213458  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.280354  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.280374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:48.796511  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:48.807012  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:48.807073  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:48.832313  528268 cri.go:89] found id: ""
	I1206 10:38:48.832337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.832344  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:48.832349  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:48.832420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:48.857914  528268 cri.go:89] found id: ""
	I1206 10:38:48.857928  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.857935  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:48.857940  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:48.858000  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:48.887721  528268 cri.go:89] found id: ""
	I1206 10:38:48.887735  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.887743  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:48.887748  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:48.887808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:48.912329  528268 cri.go:89] found id: ""
	I1206 10:38:48.912343  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.912351  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:48.912356  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:48.912416  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:48.942323  528268 cri.go:89] found id: ""
	I1206 10:38:48.942337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.942344  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:48.942349  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:48.942408  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:48.971776  528268 cri.go:89] found id: ""
	I1206 10:38:48.971790  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.971798  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:48.971803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:48.971861  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:48.997054  528268 cri.go:89] found id: ""
	I1206 10:38:48.997068  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.997076  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:48.997084  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:48.997095  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:49.071387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.071413  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.099724  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.099743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.165471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.165492  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.180707  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.180755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.246459  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:51.747477  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:51.757424  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:51.757483  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:51.785368  528268 cri.go:89] found id: ""
	I1206 10:38:51.785382  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.785390  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:51.785395  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:51.785452  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:51.814468  528268 cri.go:89] found id: ""
	I1206 10:38:51.814482  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.814489  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:51.814494  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:51.814553  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:51.839897  528268 cri.go:89] found id: ""
	I1206 10:38:51.839911  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.839918  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:51.839923  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:51.839980  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:51.865924  528268 cri.go:89] found id: ""
	I1206 10:38:51.865938  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.865951  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:51.865956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:51.866011  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:51.891688  528268 cri.go:89] found id: ""
	I1206 10:38:51.891702  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.891709  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:51.891714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:51.891772  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:51.917048  528268 cri.go:89] found id: ""
	I1206 10:38:51.917062  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.917070  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:51.917075  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:51.917132  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:51.942873  528268 cri.go:89] found id: ""
	I1206 10:38:51.942888  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.942895  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:51.942903  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:51.942914  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.011199  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:52.011209  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:52.011220  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:52.085464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.085485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.119213  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.119230  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:52.189731  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.189751  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.705436  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:54.717135  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:54.717196  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:54.755081  528268 cri.go:89] found id: ""
	I1206 10:38:54.755095  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.755105  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:54.755110  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:54.755209  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:54.780971  528268 cri.go:89] found id: ""
	I1206 10:38:54.780985  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.780993  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:54.780998  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:54.781060  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:54.806877  528268 cri.go:89] found id: ""
	I1206 10:38:54.806891  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.806898  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:54.806904  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:54.806967  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:54.832627  528268 cri.go:89] found id: ""
	I1206 10:38:54.832641  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.832649  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:54.832654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:54.832711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:54.857814  528268 cri.go:89] found id: ""
	I1206 10:38:54.857828  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.857836  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:54.857841  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:54.857897  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:54.883738  528268 cri.go:89] found id: ""
	I1206 10:38:54.883752  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.883759  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:54.883764  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:54.883821  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:54.909479  528268 cri.go:89] found id: ""
	I1206 10:38:54.909493  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.909500  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:54.909508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:54.909519  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:54.975629  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:54.975651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.991150  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:54.991166  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.064619  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:55.064628  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:55.064639  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:55.134387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.134406  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.664428  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:57.675264  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:57.675328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:57.709021  528268 cri.go:89] found id: ""
	I1206 10:38:57.709035  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.709043  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:57.709048  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:57.709116  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:57.744132  528268 cri.go:89] found id: ""
	I1206 10:38:57.744146  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.744153  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:57.744159  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:57.744226  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:57.778746  528268 cri.go:89] found id: ""
	I1206 10:38:57.778760  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.778767  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:57.778772  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:57.778829  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:57.805263  528268 cri.go:89] found id: ""
	I1206 10:38:57.805276  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.805284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:57.805289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:57.805348  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:57.831152  528268 cri.go:89] found id: ""
	I1206 10:38:57.831166  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.831173  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:57.831178  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:57.831240  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:57.857097  528268 cri.go:89] found id: ""
	I1206 10:38:57.857111  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.857119  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:57.857124  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:57.857189  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:57.882945  528268 cri.go:89] found id: ""
	I1206 10:38:57.882984  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.882992  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:57.883000  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:57.883011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.915176  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:57.915193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:57.981939  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:57.981958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:57.997358  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:57.997373  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.070527  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:58.070538  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:58.070549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.641789  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.651800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.651859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.679593  528268 cri.go:89] found id: ""
	I1206 10:39:00.679606  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.679613  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.679618  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.679673  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:00.712252  528268 cri.go:89] found id: ""
	I1206 10:39:00.712266  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.712273  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:00.712278  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:00.712337  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:00.746867  528268 cri.go:89] found id: ""
	I1206 10:39:00.746881  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.746888  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:00.746894  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:00.746954  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:00.779153  528268 cri.go:89] found id: ""
	I1206 10:39:00.779167  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.779174  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:00.779180  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:00.779241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:00.805143  528268 cri.go:89] found id: ""
	I1206 10:39:00.805157  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.805164  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:00.805170  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:00.805227  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:00.831339  528268 cri.go:89] found id: ""
	I1206 10:39:00.831353  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.831361  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:00.831368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:00.831430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:00.857571  528268 cri.go:89] found id: ""
	I1206 10:39:00.857585  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.857593  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:00.857600  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:00.857611  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:00.925179  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:00.925189  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:00.925200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.994191  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:00.994210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.029067  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.029085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.100689  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.100709  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.616374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.626603  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.626714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.651732  528268 cri.go:89] found id: ""
	I1206 10:39:03.651746  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.651753  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.651758  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.651818  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.679359  528268 cri.go:89] found id: ""
	I1206 10:39:03.679373  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.679380  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.679385  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.679442  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:03.714610  528268 cri.go:89] found id: ""
	I1206 10:39:03.714624  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.714631  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:03.714636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:03.714693  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:03.745765  528268 cri.go:89] found id: ""
	I1206 10:39:03.745780  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.745787  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:03.745792  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:03.745849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:03.771225  528268 cri.go:89] found id: ""
	I1206 10:39:03.771239  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.771247  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:03.771252  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:03.771316  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:03.796796  528268 cri.go:89] found id: ""
	I1206 10:39:03.796853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.796861  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:03.796867  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:03.796925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:03.822839  528268 cri.go:89] found id: ""
	I1206 10:39:03.822853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.822861  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:03.822878  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:03.822888  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:03.858844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:03.858860  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:03.925683  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:03.925703  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.941280  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:03.941297  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.009034  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:04.009044  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:04.009055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:06.582354  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.592267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.592340  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.617889  528268 cri.go:89] found id: ""
	I1206 10:39:06.617902  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.617909  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.617915  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.617979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.643951  528268 cri.go:89] found id: ""
	I1206 10:39:06.643966  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.643973  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.643978  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.644035  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.669753  528268 cri.go:89] found id: ""
	I1206 10:39:06.669767  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.669774  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.669779  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.669839  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.701353  528268 cri.go:89] found id: ""
	I1206 10:39:06.701373  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.701380  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.701386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.701445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.751930  528268 cri.go:89] found id: ""
	I1206 10:39:06.751944  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.751952  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.751956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.752019  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:06.778713  528268 cri.go:89] found id: ""
	I1206 10:39:06.778727  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.778734  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:06.778741  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:06.778802  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:06.804251  528268 cri.go:89] found id: ""
	I1206 10:39:06.804265  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.804273  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:06.804280  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:06.804290  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.871350  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:06.871368  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:06.885942  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:06.885960  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:06.959058  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:06.959068  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:06.959081  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:07.030114  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.030135  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:09.559397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.569971  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.570039  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.595039  528268 cri.go:89] found id: ""
	I1206 10:39:09.595052  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.595059  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.595065  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.595152  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.621113  528268 cri.go:89] found id: ""
	I1206 10:39:09.621127  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.621135  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.621140  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.651003  528268 cri.go:89] found id: ""
	I1206 10:39:09.651016  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.651024  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.651029  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.651087  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.677104  528268 cri.go:89] found id: ""
	I1206 10:39:09.677118  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.677125  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.677131  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.677187  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.713565  528268 cri.go:89] found id: ""
	I1206 10:39:09.713579  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.713587  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.713592  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.713653  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.741915  528268 cri.go:89] found id: ""
	I1206 10:39:09.741928  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.741935  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.741941  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.741997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.774013  528268 cri.go:89] found id: ""
	I1206 10:39:09.774027  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.774035  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.774042  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.774054  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:09.840091  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:09.840113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.855657  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:09.855675  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:09.919867  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:09.919877  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:09.919901  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:09.991592  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:09.991613  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.526559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.537148  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.537208  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.570214  528268 cri.go:89] found id: ""
	I1206 10:39:12.570228  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.570235  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.570241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.570299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.595309  528268 cri.go:89] found id: ""
	I1206 10:39:12.595324  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.595331  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.595342  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.595401  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.620408  528268 cri.go:89] found id: ""
	I1206 10:39:12.620422  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.620429  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.620434  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.620495  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.645606  528268 cri.go:89] found id: ""
	I1206 10:39:12.645621  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.645628  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.645644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.645700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.672105  528268 cri.go:89] found id: ""
	I1206 10:39:12.672119  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.672126  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.672132  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.672191  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.699949  528268 cri.go:89] found id: ""
	I1206 10:39:12.699964  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.699971  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.699976  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.700038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.730867  528268 cri.go:89] found id: ""
	I1206 10:39:12.730881  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.730888  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.730896  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.730907  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.760666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.760682  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.827918  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.827939  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:12.845229  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:12.845250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:12.913571  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:12.913582  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:12.913606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.486285  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.496339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.496397  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.522751  528268 cri.go:89] found id: ""
	I1206 10:39:15.522765  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.522773  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.522782  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.522842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.548733  528268 cri.go:89] found id: ""
	I1206 10:39:15.548747  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.548760  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.548765  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.548823  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.574392  528268 cri.go:89] found id: ""
	I1206 10:39:15.574406  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.574413  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.574418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.574475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.600281  528268 cri.go:89] found id: ""
	I1206 10:39:15.600297  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.600311  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.600316  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.600376  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.626469  528268 cri.go:89] found id: ""
	I1206 10:39:15.626482  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.626490  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.626496  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.626561  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.652394  528268 cri.go:89] found id: ""
	I1206 10:39:15.652407  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.652414  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.652420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.652477  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.679527  528268 cri.go:89] found id: ""
	I1206 10:39:15.679540  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.679553  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.679561  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:15.679571  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.764342  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:15.764363  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:15.798376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.798394  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.868665  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.868685  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.883983  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.883999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.952342  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.453493  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.463876  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.463935  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.490209  528268 cri.go:89] found id: ""
	I1206 10:39:18.490224  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.490231  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.490236  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.490294  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.516967  528268 cri.go:89] found id: ""
	I1206 10:39:18.516981  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.516988  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.516993  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.517054  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.546169  528268 cri.go:89] found id: ""
	I1206 10:39:18.546182  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.546189  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.546194  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.546253  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.571307  528268 cri.go:89] found id: ""
	I1206 10:39:18.571320  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.571327  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.571333  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.571391  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.596842  528268 cri.go:89] found id: ""
	I1206 10:39:18.596856  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.596863  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.596868  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.596924  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.622545  528268 cri.go:89] found id: ""
	I1206 10:39:18.622559  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.622566  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.622571  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.622628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.647866  528268 cri.go:89] found id: ""
	I1206 10:39:18.647879  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.647886  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.647894  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.647904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:18.722841  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:18.722867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:18.738489  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.738506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.804503  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.804514  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:18.804527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:18.873502  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.873520  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.404064  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.414555  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.414615  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.439357  528268 cri.go:89] found id: ""
	I1206 10:39:21.439371  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.439378  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.439384  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.439444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.464257  528268 cri.go:89] found id: ""
	I1206 10:39:21.464270  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.464278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.464283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.464342  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.489051  528268 cri.go:89] found id: ""
	I1206 10:39:21.489065  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.489072  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.489077  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.489133  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.514898  528268 cri.go:89] found id: ""
	I1206 10:39:21.514912  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.514919  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.514930  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.514988  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.540268  528268 cri.go:89] found id: ""
	I1206 10:39:21.540283  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.540290  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.540296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.540361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.564943  528268 cri.go:89] found id: ""
	I1206 10:39:21.564957  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.564965  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.564970  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.565031  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.590819  528268 cri.go:89] found id: ""
	I1206 10:39:21.590833  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.590840  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.590848  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.590858  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.656247  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:21.656258  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:21.656268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:21.726649  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.726669  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.757883  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.757900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.827592  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.827612  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
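Each gather pass above collects the same four sources: the kubelet and crio units from journald, kernel warnings from dmesg, and container status via crictl with a docker fallback. A sketch bundling those exact commands into one standalone script, assuming it is run as root directly on the node:

    #!/usr/bin/env bash
    # Same collection pass the log repeats, as one script (run as root).
    journalctl -u kubelet -n 400     # kubelet logs
    journalctl -u crio -n 400        # CRI-O logs
    # kernel warnings and worse, newest 400 lines
    dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # container status; falls back to docker if crictl is missing
    "$(which crictl || echo crictl)" ps -a || docker ps -a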
	I1206 10:39:24.344952  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.355567  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.355629  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.381792  528268 cri.go:89] found id: ""
	I1206 10:39:24.381806  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.381814  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.381819  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.381880  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.406752  528268 cri.go:89] found id: ""
	I1206 10:39:24.406766  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.406773  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.406779  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.406837  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.435444  528268 cri.go:89] found id: ""
	I1206 10:39:24.435458  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.435466  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.435471  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.435537  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.460261  528268 cri.go:89] found id: ""
	I1206 10:39:24.460275  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.460282  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.460287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.460344  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.485676  528268 cri.go:89] found id: ""
	I1206 10:39:24.485689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.485697  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.485702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.485758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.515674  528268 cri.go:89] found id: ""
	I1206 10:39:24.515689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.515696  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.515702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.515759  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.540533  528268 cri.go:89] found id: ""
	I1206 10:39:24.540547  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.540555  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.540563  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.540573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.607514  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.607536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:24.622495  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.622512  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.688734  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.679787   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.680616   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.681733   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.682450   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.684164   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:24.679787   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.680616   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.681733   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.682450   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.684164   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.688745  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:24.688755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:24.767851  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.767871  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.298384  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.308520  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.308577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.337406  528268 cri.go:89] found id: ""
	I1206 10:39:27.337421  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.337429  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.337434  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.337492  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.363616  528268 cri.go:89] found id: ""
	I1206 10:39:27.363630  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.363637  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.363643  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.363700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.387807  528268 cri.go:89] found id: ""
	I1206 10:39:27.387821  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.387828  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.387833  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.387892  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.417047  528268 cri.go:89] found id: ""
	I1206 10:39:27.417061  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.417068  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.417076  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.417135  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.443034  528268 cri.go:89] found id: ""
	I1206 10:39:27.443047  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.443055  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.443060  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.443156  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.469276  528268 cri.go:89] found id: ""
	I1206 10:39:27.469289  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.469297  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.469302  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.469361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.494605  528268 cri.go:89] found id: ""
	I1206 10:39:27.494619  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.494626  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.494634  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.494681  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.522899  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.522916  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.593447  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.593467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.608920  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.608937  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.673774  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:27.673784  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:27.673795  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
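Each cycle above opens with the same pgrep check roughly three seconds after the previous one, so the whole section is a single polling loop waiting for the apiserver process to appear. A hedged sketch of that loop; the 300 s deadline is an assumed illustration, not a value taken from the log:

    # Poll for a kube-apiserver process belonging to this minikube profile.
    deadline=$((SECONDS + 300))     # assumed timeout, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "apiserver never started" >&2; exit 1; }
      sleep 3                       # matches the ~3 s spacing in the log
    done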
	I1206 10:39:30.246836  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.257118  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.257181  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.285905  528268 cri.go:89] found id: ""
	I1206 10:39:30.285918  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.285926  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.285931  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.285991  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.312233  528268 cri.go:89] found id: ""
	I1206 10:39:30.312247  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.312254  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.312259  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.312320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.342032  528268 cri.go:89] found id: ""
	I1206 10:39:30.342047  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.342061  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.342066  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.342127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.371021  528268 cri.go:89] found id: ""
	I1206 10:39:30.371051  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.371059  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.371064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.371145  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.397540  528268 cri.go:89] found id: ""
	I1206 10:39:30.397554  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.397561  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.397566  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.397625  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.424004  528268 cri.go:89] found id: ""
	I1206 10:39:30.424018  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.424026  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.424033  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.424090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.450313  528268 cri.go:89] found id: ""
	I1206 10:39:30.450327  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.450335  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.450342  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.450352  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:30.516474  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.516493  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.532143  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.532160  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.595585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:30.595595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:30.595606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:30.664167  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.664186  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:33.200924  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.211672  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.211735  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.237137  528268 cri.go:89] found id: ""
	I1206 10:39:33.237151  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.237159  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.237165  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.237265  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.263318  528268 cri.go:89] found id: ""
	I1206 10:39:33.263332  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.263339  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.263345  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.263403  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.292810  528268 cri.go:89] found id: ""
	I1206 10:39:33.292824  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.292832  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.292837  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.292902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.322280  528268 cri.go:89] found id: ""
	I1206 10:39:33.322294  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.322302  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.322307  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.322371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.347371  528268 cri.go:89] found id: ""
	I1206 10:39:33.347384  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.347391  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.347397  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.347454  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.373452  528268 cri.go:89] found id: ""
	I1206 10:39:33.373465  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.373473  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.373478  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.373536  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.398875  528268 cri.go:89] found id: ""
	I1206 10:39:33.398895  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.398902  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.398910  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.398921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.465783  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.465803  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.480960  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.480977  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.548139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:33.548148  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:33.548158  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:33.617390  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.617412  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
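When the process check fails, minikube enumerates the expected control-plane containers one component at a time, and every query above returns an empty ID list. The same enumeration written as a single loop, assuming crictl on the node is configured to talk to CRI-O:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # An empty result is what the log reports as: found id: ""
      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
    done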
	I1206 10:39:36.152703  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.162988  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.163052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.188586  528268 cri.go:89] found id: ""
	I1206 10:39:36.188599  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.188607  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.188611  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.188670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.213361  528268 cri.go:89] found id: ""
	I1206 10:39:36.213374  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.213383  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.213388  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.213445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.239271  528268 cri.go:89] found id: ""
	I1206 10:39:36.239285  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.239292  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.239297  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.239357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.265679  528268 cri.go:89] found id: ""
	I1206 10:39:36.265695  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.265702  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.265707  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.265766  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.295654  528268 cri.go:89] found id: ""
	I1206 10:39:36.295668  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.295675  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.295681  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.295739  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.323853  528268 cri.go:89] found id: ""
	I1206 10:39:36.323874  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.323881  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.323887  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.323950  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.348624  528268 cri.go:89] found id: ""
	I1206 10:39:36.348639  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.348646  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.348654  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.348665  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.363245  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.363261  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.427550  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:36.427562  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:36.427573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:36.495925  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495943  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.524935  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.524952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.092735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.102812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.102870  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.129292  528268 cri.go:89] found id: ""
	I1206 10:39:39.129306  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.129313  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.129318  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.129374  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.158470  528268 cri.go:89] found id: ""
	I1206 10:39:39.158484  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.158491  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.158496  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.158555  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.184281  528268 cri.go:89] found id: ""
	I1206 10:39:39.184295  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.184303  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.184308  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.184371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.213800  528268 cri.go:89] found id: ""
	I1206 10:39:39.213813  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.213820  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.213825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.213879  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.239313  528268 cri.go:89] found id: ""
	I1206 10:39:39.239327  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.239334  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.239339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.239399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.266416  528268 cri.go:89] found id: ""
	I1206 10:39:39.266429  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.266436  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.266442  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.266497  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.291512  528268 cri.go:89] found id: ""
	I1206 10:39:39.291526  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.291533  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.291541  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.291552  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.357396  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.357414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.372532  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.372549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.435924  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.435935  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:39.435946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:39.504162  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.504182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
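The failing "describe nodes" step can be reproduced by hand with the same binary and kubeconfig paths the log shows; while nothing listens on :8441 it exits 1 with the identical "connection refused" stderr:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit status: $?"    # 1 while the apiserver is down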
	I1206 10:39:42.034738  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.045722  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.045786  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.075972  528268 cri.go:89] found id: ""
	I1206 10:39:42.075988  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.075998  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.076004  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.076071  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.111989  528268 cri.go:89] found id: ""
	I1206 10:39:42.112018  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.112042  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.112048  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.112124  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.147538  528268 cri.go:89] found id: ""
	I1206 10:39:42.147562  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.147571  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.147577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.147654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.177982  528268 cri.go:89] found id: ""
	I1206 10:39:42.177999  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.178009  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.178016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.178090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.209844  528268 cri.go:89] found id: ""
	I1206 10:39:42.209860  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.209868  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.209874  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.209966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.266057  528268 cri.go:89] found id: ""
	I1206 10:39:42.266071  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.266079  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.266085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.266153  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.298140  528268 cri.go:89] found id: ""
	I1206 10:39:42.298154  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.298162  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.298184  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.298197  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.330034  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.330051  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.396938  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.396958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.412056  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.412077  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.481304  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:42.481314  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:42.481326  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.054765  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.080943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.081023  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.141872  528268 cri.go:89] found id: ""
	I1206 10:39:45.141889  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.141898  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.141904  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.141970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.187818  528268 cri.go:89] found id: ""
	I1206 10:39:45.187838  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.187846  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.187854  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.187928  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.231785  528268 cri.go:89] found id: ""
	I1206 10:39:45.231815  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.231846  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.231853  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.232001  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.271976  528268 cri.go:89] found id: ""
	I1206 10:39:45.272000  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.272007  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.272020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.272144  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.309755  528268 cri.go:89] found id: ""
	I1206 10:39:45.309770  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.309778  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.309784  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.309859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.337077  528268 cri.go:89] found id: ""
	I1206 10:39:45.337091  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.337098  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.337104  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.337161  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.363255  528268 cri.go:89] found id: ""
	I1206 10:39:45.363269  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.363277  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.363285  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.363295  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.430326  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.430345  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.445222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.445239  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.514305  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:45.514315  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:45.514351  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.586673  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.586702  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
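	The cycle above is minikube's log-gathering fallback: with the apiserver on localhost:8441 unreachable, it probes the CRI for each expected control-plane container by name and, finding none, dumps kubelet, dmesg, describe-nodes, CRI-O, and container-status output instead. A minimal sketch of one probe plus the container-status fallback, runnable inside the node (assuming crictl is on PATH, as these logs indicate):
	
	    # List IDs of all containers, running or exited, whose name matches
	    # kube-apiserver; empty output is what logs.go reports as "0 containers".
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # Container-status fallback used above when crictl is unavailable:
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	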
	I1206 10:39:48.117880  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.128191  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.128261  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.153898  528268 cri.go:89] found id: ""
	I1206 10:39:48.153912  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.153919  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.153924  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.153986  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.179947  528268 cri.go:89] found id: ""
	I1206 10:39:48.179960  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.179968  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.179973  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.180032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.206970  528268 cri.go:89] found id: ""
	I1206 10:39:48.206984  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.206992  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.206997  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.207056  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.232490  528268 cri.go:89] found id: ""
	I1206 10:39:48.232504  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.232511  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.232516  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.232574  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.261888  528268 cri.go:89] found id: ""
	I1206 10:39:48.261902  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.261909  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.261915  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.261970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.287239  528268 cri.go:89] found id: ""
	I1206 10:39:48.287259  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.287266  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.287271  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.287327  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.312701  528268 cri.go:89] found id: ""
	I1206 10:39:48.312716  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.312723  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.312730  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.312741  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.379854  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.379873  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:48.395027  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.395043  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.467966  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:48.467977  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:48.467999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:48.537326  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.537347  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:51.077353  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.088357  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.088422  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.113964  528268 cri.go:89] found id: ""
	I1206 10:39:51.113978  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.113986  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.113991  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.114048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.141966  528268 cri.go:89] found id: ""
	I1206 10:39:51.141981  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.141989  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.141994  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.142065  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.170585  528268 cri.go:89] found id: ""
	I1206 10:39:51.170599  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.170607  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.170612  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.170670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.196958  528268 cri.go:89] found id: ""
	I1206 10:39:51.196972  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.196980  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.196985  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.197045  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.222240  528268 cri.go:89] found id: ""
	I1206 10:39:51.222255  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.222262  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.222267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.222328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.248023  528268 cri.go:89] found id: ""
	I1206 10:39:51.248038  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.248045  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.248051  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.248110  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.276094  528268 cri.go:89] found id: ""
	I1206 10:39:51.276108  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.276115  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.276122  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.276132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.342420  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.342443  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.357018  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.357034  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.423986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.423996  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:51.424007  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:51.493620  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.493640  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:54.023829  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:54.034889  528268 kubeadm.go:602] duration metric: took 4m2.326619845s to restartPrimaryControlPlane
	W1206 10:39:54.034955  528268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:39:54.035078  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:39:54.453084  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:39:54.466906  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:39:54.474624  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:39:54.474678  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:39:54.482552  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:39:54.482562  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:39:54.482612  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:39:54.490238  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:39:54.490301  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:39:54.497760  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:39:54.505776  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:39:54.505840  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:39:54.513397  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.521456  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:39:54.521517  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.529274  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:39:54.537105  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:39:54.537161  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
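	The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here every file is absent, so each grep exits with status 2 and the rm is a no-op). A hedged reproduction as a shell loop, with the endpoint and file names taken verbatim from the log:
	
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done
	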
	I1206 10:39:54.544719  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:39:54.584997  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:39:54.585045  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:39:54.652750  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:39:54.652815  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:39:54.652850  528268 kubeadm.go:319] OS: Linux
	I1206 10:39:54.652893  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:39:54.652940  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:39:54.652986  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:39:54.653033  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:39:54.653079  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:39:54.653126  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:39:54.653171  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:39:54.653217  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:39:54.653262  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:39:54.728791  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:39:54.728901  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:39:54.729018  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:39:54.737647  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:39:54.741159  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:39:54.741265  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:39:54.741337  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:39:54.741433  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:39:54.741505  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:39:54.741585  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:39:54.741651  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:39:54.741743  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:39:54.741813  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:39:54.741895  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:39:54.741991  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:39:54.742045  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:39:54.742113  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:39:55.375743  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:39:55.444664  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:39:55.561708  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:39:55.802678  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:39:55.992428  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:39:55.993134  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:39:55.995941  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:39:55.999335  528268 out.go:252]   - Booting up control plane ...
	I1206 10:39:55.999434  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:39:55.999507  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:39:55.999569  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:39:56.016567  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:39:56.016688  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:39:56.025029  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:39:56.025345  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:39:56.025411  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:39:56.167783  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:39:56.167896  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:43:56.165890  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000163749s
	I1206 10:43:56.165916  528268 kubeadm.go:319] 
	I1206 10:43:56.165973  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:43:56.166007  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:43:56.166124  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:43:56.166130  528268 kubeadm.go:319] 
	I1206 10:43:56.166237  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:43:56.166298  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:43:56.166345  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:43:56.166349  528268 kubeadm.go:319] 
	I1206 10:43:56.171451  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:43:56.171899  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:43:56.172014  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:43:56.172288  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:43:56.172293  528268 kubeadm.go:319] 
	I1206 10:43:56.172374  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:43:56.172501  528268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000163749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
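	Of the preflight warnings above, the cgroups v1 notice is the one that names a concrete remediation: kubelet v1.35 or newer refuses cgroup v1 hosts (this 5.15.0-1084-aws kernel runs cgroup v1) unless 'FailCgroupV1' is explicitly set to 'false'. A minimal sketch of that setting as a KubeletConfiguration fragment; the camelCase field spelling and the file path are assumptions, since the warning names only the option:
	
	    # hypothetical location; minikube normally injects kubelet config itself
	    cat <<'EOF' > /tmp/kubelet-cgroupv1.yaml
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false
	    EOF
	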
	
	I1206 10:43:56.172597  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:43:56.619462  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:43:56.633229  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:43:56.633287  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:43:56.641609  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:43:56.641619  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:43:56.641669  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:43:56.649494  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:43:56.649548  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:43:56.657009  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:43:56.665153  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:43:56.665204  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:43:56.672965  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.681003  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:43:56.681063  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.688721  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:43:56.696901  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:43:56.696963  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:43:56.704620  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:43:56.745749  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:43:56.745826  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:43:56.814552  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:43:56.814625  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:43:56.814668  528268 kubeadm.go:319] OS: Linux
	I1206 10:43:56.814710  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:43:56.814764  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:43:56.814817  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:43:56.814861  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:43:56.814913  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:43:56.814977  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:43:56.815030  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:43:56.815078  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:43:56.815150  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:43:56.882919  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:43:56.883028  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:43:56.883177  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:43:56.891776  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:43:56.897133  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:43:56.897243  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:43:56.897331  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:43:56.897418  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:43:56.897483  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:43:56.897556  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:43:56.897613  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:43:56.897679  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:43:56.897743  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:43:56.897822  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:43:56.897898  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:43:56.897938  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:43:56.897997  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:43:57.103756  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:43:57.598666  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:43:58.161834  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:43:58.402161  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:43:58.630471  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:43:58.631113  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:43:58.634023  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:43:58.637198  528268 out.go:252]   - Booting up control plane ...
	I1206 10:43:58.637294  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:43:58.637640  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:43:58.639086  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:43:58.654264  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:43:58.654366  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:43:58.662722  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:43:58.663439  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:43:58.663774  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:43:58.799365  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:43:58.799473  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:47:58.799403  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000249913s
	I1206 10:47:58.799433  528268 kubeadm.go:319] 
	I1206 10:47:58.799491  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:47:58.799521  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:47:58.799619  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:47:58.799623  528268 kubeadm.go:319] 
	I1206 10:47:58.799720  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:47:58.799749  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:47:58.799777  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:47:58.799780  528268 kubeadm.go:319] 
	I1206 10:47:58.803822  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:47:58.804249  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:47:58.804357  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:47:58.804590  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:47:58.804595  528268 kubeadm.go:319] 
	I1206 10:47:58.804663  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:47:58.804715  528268 kubeadm.go:403] duration metric: took 12m7.139257328s to StartCluster
	I1206 10:47:58.804746  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:47:58.804808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:47:58.833842  528268 cri.go:89] found id: ""
	I1206 10:47:58.833855  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.833863  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:47:58.833869  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:47:58.833925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:47:58.859642  528268 cri.go:89] found id: ""
	I1206 10:47:58.859656  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.859663  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:47:58.859668  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:47:58.859731  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:47:58.888835  528268 cri.go:89] found id: ""
	I1206 10:47:58.888850  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.888857  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:47:58.888863  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:47:58.888920  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:47:58.913692  528268 cri.go:89] found id: ""
	I1206 10:47:58.913706  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.913713  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:47:58.913718  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:47:58.913775  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:47:58.941639  528268 cri.go:89] found id: ""
	I1206 10:47:58.941653  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.941660  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:47:58.941671  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:47:58.941728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:47:58.968219  528268 cri.go:89] found id: ""
	I1206 10:47:58.968240  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.968249  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:47:58.968254  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:47:58.968312  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:47:58.993376  528268 cri.go:89] found id: ""
	I1206 10:47:58.993390  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.993397  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:47:58.993405  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:47:58.993415  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:47:59.059491  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:47:59.059510  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:47:59.075692  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:47:59.075708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:47:59.140902  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:47:59.140911  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:47:59.140922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:47:59.218521  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:47:59.218539  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:47:59.255468  528268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:47:59.255514  528268 out.go:285] * 
	W1206 10:47:59.255766  528268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1206 10:47:59.255841  528268 out.go:285] * 
	W1206 10:47:59.258456  528268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:47:59.265427  528268 out.go:203] 
	W1206 10:47:59.268413  528268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1206 10:47:59.268473  528268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:47:59.268491  528268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:47:59.271584  528268 out.go:203] 
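	# The wait-control-plane timeout above comes down to the kubelet refusing to
	# start on a cgroup v1 host; the kubelet units later in this dump exit with
	# "kubelet is configured to not run on a host using cgroup v1". The
	# SystemVerification warning names the escape hatch: set the kubelet
	# configuration option 'FailCgroupV1' to 'false'. A minimal sketch of doing
	# that through the kubeadm patches mechanism already in use above (the
	# "kubeletconfiguration" strategic-merge target); the failCgroupV1 field
	# spelling and the patch directory are assumptions, not taken from this report:
	mkdir -p /tmp/kubeadm-patches
	cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<-'EOF'
	failCgroupV1: false
	EOF
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubeadm-patches
	# Alternatively, migrate the host to cgroup v2 (e.g. boot with
	# systemd.unified_cgroup_hierarchy=1), which is what the warning recommends.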
	
	
	==> CRI-O <==
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510413066Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510587528Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510631539Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510692789Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-123579 found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542613043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.54278168Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542832714Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-123579 found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.568965528Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569093041Z" level=info msg="Image localhost/kicbase/echo-server:functional-123579 not found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569130307Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-123579 found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.415971983Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416234295Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416285124Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416360913Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-123579 found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443629234Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443787499Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443828999Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-123579 found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.48107794Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=b88f3676-3120-4861-8534-602a63bfd49e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:50:06.995491   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:06.996274   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:06.998063   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:06.998465   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:07.000759   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:07 up  3:32,  0 user,  load average: 0.64, 0.37, 0.49
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:50:04 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:05 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2296.
	Dec 06 10:50:05 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:05 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:05 functional-123579 kubelet[23384]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:05 functional-123579 kubelet[23384]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:05 functional-123579 kubelet[23384]: E1206 10:50:05.249357   23384 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:05 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:05 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:05 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2297.
	Dec 06 10:50:05 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:05 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:05 functional-123579 kubelet[23421]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:05 functional-123579 kubelet[23421]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:05 functional-123579 kubelet[23421]: E1206 10:50:05.985224   23421 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:05 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:05 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:06 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2298.
	Dec 06 10:50:06 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:06 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:06 functional-123579 kubelet[23456]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:06 functional-123579 kubelet[23456]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:06 functional-123579 kubelet[23456]: E1206 10:50:06.737686   23456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:06 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:06 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
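	# The kubelet is in a tight systemd restart loop here (counter 2296 -> 2298
	# within two seconds), failing config validation on every attempt. A one-line
	# host check (standard coreutils) tells the two cgroup hierarchies apart:
	stat -fc %T /sys/fs/cgroup
	# prints "cgroup2fs" on a cgroup v2 host; "tmpfs" indicates the legacy v1
	# hierarchy that this kubelet build rejects.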
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (361.337734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-123579 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-123579 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (67.823194ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-123579 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-123579 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-123579 describe po hello-node-connect: exit status 1 (57.930801ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-123579 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-123579 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-123579 logs -l app=hello-node-connect: exit status 1 (59.025399ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-123579 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-123579 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-123579 describe svc hello-node-connect: exit status 1 (59.336817ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-123579 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
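Every kubectl call in this post-mortem fails identically: nothing accepts connections on 192.168.49.2:8441, because the control plane never came up (see the kubelet restart loop in the logs below). A direct probe, a hypothetical one-liner not part of the suite, separates an apiserver outage from a kubectl misconfiguration:

	curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"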
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
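The inspect output shows the container Running with 8441/tcp published on 127.0.0.1:33186, so the connection refusals originate inside the node (no apiserver process) rather than from the Docker port mapping. A quick confirmation with the standard docker CLI:

	docker port functional-123579 8441
	# expected, per the inspect output above: 127.0.0.1:33186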
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (315.553306ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image save kicbase/echo-server:functional-123579 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/ssl/certs/4880682.pem                                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image rm kicbase/echo-server:functional-123579 --alsologtostderr                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /usr/share/ca-certificates/4880682.pem                                                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo cat /etc/test/nested/copy/488068/hosts                                                                                         │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image ls                                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service list                                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ image   │ functional-123579 image save --daemon kicbase/echo-server:functional-123579 --alsologtostderr                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service list -o json                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ service │ functional-123579 service --namespace=default --https --url hello-node                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-123579 ssh echo hello                                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service hello-node --url --format={{.IP}}                                                                                               │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-123579 ssh cat /etc/hostname                                                                                                                   │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ service │ functional-123579 service hello-node --url                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-123579 tunnel --alsologtostderr                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-123579 tunnel --alsologtostderr                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-123579 tunnel --alsologtostderr                                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ addons  │ functional-123579 addons list                                                                                                                             │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ addons  │ functional-123579 addons list -o json                                                                                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:35:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:35:46.955658  528268 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:46.955828  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.955833  528268 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:46.955837  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.956177  528268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:35:46.956655  528268 out.go:368] Setting JSON to false
	I1206 10:35:46.957664  528268 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11898,"bootTime":1765005449,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:35:46.957734  528268 start.go:143] virtualization:  
	I1206 10:35:46.961283  528268 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:35:46.964510  528268 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:35:46.964613  528268 notify.go:221] Checking for updates...
	I1206 10:35:46.968278  528268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:46.971356  528268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:35:46.974199  528268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:35:46.977104  528268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:35:46.980765  528268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:46.984213  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:46.984322  528268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:47.012645  528268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:35:47.012749  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.074577  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.064697556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.074671  528268 docker.go:319] overlay module found
	I1206 10:35:47.077640  528268 out.go:179] * Using the docker driver based on existing profile
	I1206 10:35:47.080521  528268 start.go:309] selected driver: docker
	I1206 10:35:47.080533  528268 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.080637  528268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:47.080758  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.138440  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.128848609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.138821  528268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:35:47.138844  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:47.138899  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:47.138936  528268 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.144166  528268 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:35:47.147068  528268 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:35:47.149949  528268 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:35:47.152780  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:47.152816  528268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:35:47.152824  528268 cache.go:65] Caching tarball of preloaded images
	I1206 10:35:47.152870  528268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:35:47.152921  528268 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:35:47.152931  528268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:35:47.153043  528268 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:35:47.172511  528268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:35:47.172523  528268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:35:47.172545  528268 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:35:47.172580  528268 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:35:47.172652  528268 start.go:364] duration metric: took 54.497µs to acquireMachinesLock for "functional-123579"
	I1206 10:35:47.172672  528268 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:35:47.172676  528268 fix.go:54] fixHost starting: 
	I1206 10:35:47.172937  528268 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:35:47.189604  528268 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:35:47.189624  528268 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:35:47.192615  528268 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:35:47.192637  528268 machine.go:94] provisionDockerMachine start ...
	I1206 10:35:47.192731  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.209670  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.209990  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.209996  528268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:35:47.362840  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.362854  528268 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:35:47.362918  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.381544  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.381860  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.381868  528268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:35:47.544930  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.545031  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.563487  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.563810  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.563823  528268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:35:47.717170  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:35:47.717187  528268 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:35:47.717204  528268 ubuntu.go:190] setting up certificates
	I1206 10:35:47.717211  528268 provision.go:84] configureAuth start
	I1206 10:35:47.717282  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:47.741856  528268 provision.go:143] copyHostCerts
	I1206 10:35:47.741924  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:35:47.741936  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:35:47.742009  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:35:47.742105  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:35:47.742109  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:35:47.742132  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:35:47.742180  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:35:47.742184  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:35:47.742206  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:35:47.742252  528268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:35:47.924439  528268 provision.go:177] copyRemoteCerts
	I1206 10:35:47.924500  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:35:47.924538  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.942367  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.047397  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:35:48.065928  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:35:48.085149  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:35:48.103937  528268 provision.go:87] duration metric: took 386.701009ms to configureAuth
	I1206 10:35:48.103956  528268 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:35:48.104161  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:48.104265  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.122386  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:48.122699  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:48.122711  528268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:35:48.484149  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:35:48.484161  528268 machine.go:97] duration metric: took 1.291517603s to provisionDockerMachine
	I1206 10:35:48.484171  528268 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:35:48.484183  528268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:35:48.484243  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:35:48.484311  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.507680  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.615171  528268 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:35:48.618416  528268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:35:48.618434  528268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:35:48.618444  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:35:48.618496  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:35:48.618569  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:35:48.618650  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:35:48.618693  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:35:48.626464  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:48.643882  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:35:48.662582  528268 start.go:296] duration metric: took 178.395271ms for postStartSetup
	I1206 10:35:48.662675  528268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:35:48.662713  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.680751  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.784322  528268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:35:48.789238  528268 fix.go:56] duration metric: took 1.616554387s for fixHost
	I1206 10:35:48.789253  528268 start.go:83] releasing machines lock for "functional-123579", held for 1.616594099s
	I1206 10:35:48.789324  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:48.807477  528268 ssh_runner.go:195] Run: cat /version.json
	I1206 10:35:48.807520  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.807562  528268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:35:48.807618  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.828942  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.845083  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:49.020126  528268 ssh_runner.go:195] Run: systemctl --version
	I1206 10:35:49.026608  528268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:35:49.065500  528268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:35:49.069961  528268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:35:49.070024  528268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:35:49.077978  528268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:35:49.077992  528268 start.go:496] detecting cgroup driver to use...
	I1206 10:35:49.078033  528268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:35:49.078078  528268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:35:49.093402  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:35:49.106707  528268 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:35:49.106771  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:35:49.122603  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:35:49.135424  528268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:35:49.251969  528268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:35:49.384025  528268 docker.go:234] disabling docker service ...
	I1206 10:35:49.384082  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:35:49.398904  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:35:49.412283  528268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:35:49.535452  528268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:35:49.651851  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:35:49.665735  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:35:49.680503  528268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:35:49.680561  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.689947  528268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:35:49.690006  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.699358  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.708725  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.718744  528268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:35:49.727534  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.737013  528268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.745582  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.754308  528268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:35:49.762144  528268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:35:49.769875  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:49.884338  528268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:35:50.052236  528268 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:35:50.052348  528268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:35:50.057582  528268 start.go:564] Will wait 60s for crictl version
	I1206 10:35:50.057651  528268 ssh_runner.go:195] Run: which crictl
	I1206 10:35:50.062638  528268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:35:50.100652  528268 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:35:50.100743  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.139579  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.174800  528268 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:35:50.177732  528268 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:35:50.194850  528268 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:35:50.201950  528268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:35:50.204938  528268 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:35:50.205078  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:50.205145  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.240680  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.240692  528268 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:35:50.240750  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.267939  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.267955  528268 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:35:50.267962  528268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:35:50.268053  528268 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:35:50.268129  528268 ssh_runner.go:195] Run: crio config
	I1206 10:35:50.326220  528268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:35:50.326240  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:50.326248  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:50.326256  528268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:35:50.326280  528268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:35:50.326407  528268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:35:50.326477  528268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:35:50.334319  528268 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:35:50.334378  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:35:50.341826  528268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:35:50.354245  528268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:35:50.367015  528268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:35:50.379350  528268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:35:50.382958  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:50.504018  528268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:35:50.930865  528268 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:35:50.930875  528268 certs.go:195] generating shared ca certs ...
	I1206 10:35:50.930889  528268 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:35:50.931046  528268 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:35:50.931093  528268 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:35:50.931099  528268 certs.go:257] generating profile certs ...
	I1206 10:35:50.931220  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:35:50.931274  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:35:50.931318  528268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:35:50.931430  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:35:50.931460  528268 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:35:50.931466  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:35:50.931493  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:35:50.931515  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:35:50.931536  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:35:50.931577  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:50.932148  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:35:50.953643  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:35:50.975543  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:35:50.998708  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:35:51.019841  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:35:51.038179  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:35:51.055740  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:35:51.075573  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:35:51.094756  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:35:51.113922  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:35:51.132368  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:35:51.150650  528268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:35:51.163984  528268 ssh_runner.go:195] Run: openssl version
	I1206 10:35:51.171418  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.179298  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:35:51.187013  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190756  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190814  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.231889  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:35:51.239348  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.246609  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:35:51.254276  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258574  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258631  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.301011  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:35:51.308790  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.316400  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:35:51.324195  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328353  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328409  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.371753  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:35:51.379339  528268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:35:51.383319  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:35:51.424469  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:35:51.465529  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:35:51.511345  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:35:51.565170  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:35:51.614532  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:35:51.665468  528268 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:51.665553  528268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:35:51.665612  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.699589  528268 cri.go:89] found id: ""
	I1206 10:35:51.699652  528268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:35:51.708250  528268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:35:51.708260  528268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:35:51.708318  528268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:35:51.716593  528268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.717135  528268 kubeconfig.go:125] found "functional-123579" server: "https://192.168.49.2:8441"
	I1206 10:35:51.718506  528268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:35:51.728290  528268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:21:13.758601441 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:35:50.371679399 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1206 10:35:51.728307  528268 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:35:51.728319  528268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:35:51.728381  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.763757  528268 cri.go:89] found id: ""
	I1206 10:35:51.763820  528268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:35:51.777420  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:35:51.785097  528268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:25 /etc/kubernetes/scheduler.conf
	
	I1206 10:35:51.785162  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:35:51.792642  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:35:51.800316  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.800387  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:35:51.808313  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.815662  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.815715  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.823153  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:35:51.831093  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.831167  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:35:51.838577  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:35:51.846346  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:51.894809  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:52.979571  528268 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084737023s)
	I1206 10:35:52.979630  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.188528  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.255794  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.309672  528268 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:35:53.309740  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:53.810758  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:54.309899  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:54.810832  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:55.309958  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:55.809819  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:56.310103  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:56.809902  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:57.309923  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:35:57.809975  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[identical "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" entries repeated every ~0.5s from 10:35:58.310731 through 10:36:52.310532; 109 lines elided]
	I1206 10:36:52.810599  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
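
The half-second cadence above is a poll-until-ready loop: minikube keeps asking the node, over SSH, whether a kube-apiserver process exists yet, and every attempt comes back empty. A minimal sketch of that pattern in Go (the `runSSH` callback is a hypothetical stand-in for minikube's ssh_runner; this is not the actual implementation):

    package main

    import (
        "errors"
        "fmt"
        "strings"
        "time"
    )

    // pollAPIServer repeatedly runs pgrep on the node until a kube-apiserver
    // process shows up or the timeout passes. The 500ms sleep matches the
    // cadence visible in the log above.
    func pollAPIServer(runSSH func(cmd string) (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*")
            if err == nil && strings.TrimSpace(out) != "" {
                return strings.TrimSpace(out), nil // PID found: the apiserver process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", errors.New("timed out waiting for a kube-apiserver process")
    }

    func main() {
        // Fake runner that never finds the process, mirroring the failing run above.
        never := func(string) (string, error) { return "", errors.New("exit status 1") }
        if _, err := pollAPIServer(never, 2*time.Second); err != nil {
            fmt.Println("apiserver wait failed:", err)
        }
    }
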
	I1206 10:36:53.310630  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:53.310706  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:53.342266  528268 cri.go:89] found id: ""
	I1206 10:36:53.342280  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.342287  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:53.342292  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:53.342356  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:53.368755  528268 cri.go:89] found id: ""
	I1206 10:36:53.368774  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.368781  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:53.368785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:53.368846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:53.393431  528268 cri.go:89] found id: ""
	I1206 10:36:53.393447  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.393454  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:53.393459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:53.393515  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:53.418954  528268 cri.go:89] found id: ""
	I1206 10:36:53.418967  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.418974  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:53.418979  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:53.419036  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:53.444726  528268 cri.go:89] found id: ""
	I1206 10:36:53.444740  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.444747  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:53.444752  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:53.444809  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:53.469041  528268 cri.go:89] found id: ""
	I1206 10:36:53.469054  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.469062  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:53.469067  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:53.469122  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:53.494455  528268 cri.go:89] found id: ""
	I1206 10:36:53.494468  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.494475  528268 logs.go:284] No container was found matching "kindnet"
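
After the process check fails, each retry asks CRI-O for containers by name with `crictl ps -a --quiet --name=<component>`; `--quiet` prints only container IDs, so the empty `found id: ""` results above mean none of the control-plane containers were ever created. A sketch of that query, assuming crictl is installed and sudo is available (illustrative, not minikube's cri.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs wraps "sudo crictl ps -a --quiet --name=<name>", the
    // exact command in the log: -a includes exited containers, --quiet emits
    // one container ID per line, and empty output means no match in any state.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }
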
	I1206 10:36:53.494483  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:53.494496  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:53.557127  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
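
Every `kubectl describe nodes` attempt in this section dies the same way: nothing is listening on localhost:8441, this cluster's apiserver port, so each of kubectl's discovery requests gets connection refused before any API call is made. A quick reachability probe makes that concrete (plain Go stdlib; port 8441 is taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe answers the only question that matters for the errors above:
    // is anything accepting TCP connections on the apiserver port?
    func probe(addr string) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            // A refusal here corresponds to kubectl's
            // "dial tcp [::1]:8441: connect: connection refused".
            fmt.Printf("%s unreachable: %v\n", addr, err)
            return
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", addr)
    }

    func main() {
        probe("localhost:8441")
    }
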
	I1206 10:36:53.557137  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:53.557148  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:53.629870  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:53.629900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:53.661451  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:53.661466  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:53.730909  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:53.730927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
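
The log gathering itself shells out through `/bin/bash -c` so that pipelines work; the dmesg invocation above, for instance, filters to warning-and-worse levels and keeps the last 400 lines. A sketch of running one of those gatherers and capturing its output (the shell commands are copied verbatim from the log; the surrounding Go is illustrative, not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs a shell pipeline the way the log does: wrapped in
    // "/bin/bash -c" so that pipes inside the command work. CombinedOutput
    // returns whatever was produced even when the command fails.
    func gather(name, pipeline string) {
        out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", name, err)
        }
        fmt.Printf("=== %s (%d bytes) ===\n%s", name, len(out), out)
    }

    func main() {
        gather("kubelet", `sudo journalctl -u kubelet -n 400`)
        gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    }
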
	I1206 10:36:56.247245  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:56.257306  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:56.257364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:56.286141  528268 cri.go:89] found id: ""
	I1206 10:36:56.286155  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.286163  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:56.286168  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:56.286228  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:56.313467  528268 cri.go:89] found id: ""
	I1206 10:36:56.313481  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.313488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:56.313499  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:56.313559  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:56.340777  528268 cri.go:89] found id: ""
	I1206 10:36:56.340791  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.340798  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:56.340803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:56.340862  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:56.367085  528268 cri.go:89] found id: ""
	I1206 10:36:56.367099  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.367106  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:56.367111  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:56.367188  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:56.392392  528268 cri.go:89] found id: ""
	I1206 10:36:56.392407  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.392414  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:56.392420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:56.392482  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:56.417786  528268 cri.go:89] found id: ""
	I1206 10:36:56.417799  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.417807  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:56.417812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:56.417871  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:56.443872  528268 cri.go:89] found id: ""
	I1206 10:36:56.443886  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.443893  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:56.443901  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:56.443911  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:56.509704  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:56.509723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:56.524726  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:56.524742  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:56.590779  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
	I1206 10:36:56.590789  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:56.590799  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:56.657863  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:56.657883  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.188879  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:59.199665  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:59.199726  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:59.232126  528268 cri.go:89] found id: ""
	I1206 10:36:59.232140  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.232148  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:59.232153  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:59.232212  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:59.257550  528268 cri.go:89] found id: ""
	I1206 10:36:59.257564  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.257571  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:59.257576  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:59.257633  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:59.282608  528268 cri.go:89] found id: ""
	I1206 10:36:59.282623  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.282630  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:59.282636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:59.282698  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:59.312791  528268 cri.go:89] found id: ""
	I1206 10:36:59.312806  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.312813  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:59.312819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:59.312881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:59.339361  528268 cri.go:89] found id: ""
	I1206 10:36:59.339376  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.339383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:59.339388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:59.339447  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:59.366255  528268 cri.go:89] found id: ""
	I1206 10:36:59.366269  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.366276  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:59.366281  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:59.366339  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:59.394131  528268 cri.go:89] found id: ""
	I1206 10:36:59.394145  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.394152  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:59.394172  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:59.394182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:59.462514  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:59.462536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.491731  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:59.491747  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:59.562406  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:59.562426  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:59.577286  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:59.577302  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:59.642145  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
	I1206 10:37:02.143135  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:02.153343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:02.153402  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:02.182430  528268 cri.go:89] found id: ""
	I1206 10:37:02.182453  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.182460  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:02.182466  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:02.182529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:02.217140  528268 cri.go:89] found id: ""
	I1206 10:37:02.217164  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.217171  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:02.217176  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:02.217241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:02.264761  528268 cri.go:89] found id: ""
	I1206 10:37:02.264775  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.264795  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:02.264800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:02.264857  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:02.295104  528268 cri.go:89] found id: ""
	I1206 10:37:02.295118  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.295161  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:02.295166  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:02.295232  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:02.324690  528268 cri.go:89] found id: ""
	I1206 10:37:02.324704  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.324711  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:02.324716  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:02.324776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:02.354165  528268 cri.go:89] found id: ""
	I1206 10:37:02.354179  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.354187  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:02.354192  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:02.354250  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:02.379657  528268 cri.go:89] found id: ""
	I1206 10:37:02.379671  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.379679  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:02.379686  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:02.379697  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:02.449725  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:02.449746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:02.464766  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:02.464783  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:02.527444  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
	I1206 10:37:02.527457  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:02.527467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:02.595482  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:02.595503  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.126581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:05.136725  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:05.136783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:05.162008  528268 cri.go:89] found id: ""
	I1206 10:37:05.162022  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.162049  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:05.162055  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:05.162123  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:05.190290  528268 cri.go:89] found id: ""
	I1206 10:37:05.190305  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.190313  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:05.190318  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:05.190399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:05.222971  528268 cri.go:89] found id: ""
	I1206 10:37:05.223000  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.223008  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:05.223013  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:05.223083  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:05.249192  528268 cri.go:89] found id: ""
	I1206 10:37:05.249206  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.249213  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:05.249218  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:05.249285  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:05.280084  528268 cri.go:89] found id: ""
	I1206 10:37:05.280097  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.280104  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:05.280110  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:05.280176  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:05.306008  528268 cri.go:89] found id: ""
	I1206 10:37:05.306036  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.306044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:05.306049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:05.306115  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:05.331829  528268 cri.go:89] found id: ""
	I1206 10:37:05.331843  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.331850  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:05.331858  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:05.331868  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:05.394775  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
	I1206 10:37:05.394787  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:05.394798  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:05.463063  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:05.463082  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.496791  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:05.496808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:05.562749  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:05.562768  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.077865  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.088556  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:08.088628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:08.114942  528268 cri.go:89] found id: ""
	I1206 10:37:08.114956  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.114963  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:08.114969  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:08.115027  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:08.141141  528268 cri.go:89] found id: ""
	I1206 10:37:08.141155  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.141162  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:08.141167  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:08.141235  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:08.166303  528268 cri.go:89] found id: ""
	I1206 10:37:08.166318  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.166325  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:08.166334  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:08.166394  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:08.199234  528268 cri.go:89] found id: ""
	I1206 10:37:08.199248  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.199255  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:08.199260  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:08.199326  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:08.231753  528268 cri.go:89] found id: ""
	I1206 10:37:08.231767  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.231774  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:08.231780  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:08.231842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:08.260152  528268 cri.go:89] found id: ""
	I1206 10:37:08.260166  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.260173  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:08.260179  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:08.260241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:08.285346  528268 cri.go:89] found id: ""
	I1206 10:37:08.285360  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.285367  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:08.285378  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:08.285388  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:08.353719  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:08.353740  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:08.385085  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:08.385101  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:08.459734  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:08.459762  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.474846  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:08.474862  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:08.546432  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
	I1206 10:37:11.048129  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.058654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:11.058714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:11.086873  528268 cri.go:89] found id: ""
	I1206 10:37:11.086889  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.086896  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:11.086903  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:11.086965  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:11.113880  528268 cri.go:89] found id: ""
	I1206 10:37:11.113904  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.113912  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:11.113918  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:11.113987  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:11.142338  528268 cri.go:89] found id: ""
	I1206 10:37:11.142361  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.142370  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:11.142375  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:11.142448  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:11.168341  528268 cri.go:89] found id: ""
	I1206 10:37:11.168355  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.168362  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:11.168368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:11.168425  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:11.218236  528268 cri.go:89] found id: ""
	I1206 10:37:11.218277  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.218285  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:11.218290  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:11.218357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:11.257366  528268 cri.go:89] found id: ""
	I1206 10:37:11.257379  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.257386  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:11.257391  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:11.257455  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:11.283202  528268 cri.go:89] found id: ""
	I1206 10:37:11.283224  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.283235  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:11.283251  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:11.283269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:11.349630  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:11.349650  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:11.365578  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:11.365606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:11.431959  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the stderr shown above; elided]
	** /stderr **
	I1206 10:37:11.431970  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:11.431981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:11.502903  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:11.502922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:14.032953  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.043177  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:14.043291  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:14.068855  528268 cri.go:89] found id: ""
	I1206 10:37:14.068870  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.068877  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:14.068882  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:14.068946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:14.094277  528268 cri.go:89] found id: ""
	I1206 10:37:14.094290  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.094308  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:14.094315  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:14.094372  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:14.119916  528268 cri.go:89] found id: ""
	I1206 10:37:14.119930  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.119948  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:14.119954  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:14.120029  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:14.144999  528268 cri.go:89] found id: ""
	I1206 10:37:14.145012  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.145020  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:14.145026  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:14.145088  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:14.170372  528268 cri.go:89] found id: ""
	I1206 10:37:14.170386  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.170404  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:14.170409  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:14.170475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:14.220015  528268 cri.go:89] found id: ""
	I1206 10:37:14.220029  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.220036  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:14.220041  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:14.220102  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:14.249187  528268 cri.go:89] found id: ""
	I1206 10:37:14.249201  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.249208  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:14.249216  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:14.249226  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:14.315809  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:14.315830  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:14.331228  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:14.331245  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:14.394665  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:14.394676  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:14.394686  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:14.466599  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:14.466623  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
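The timestamps above (10:37:11, 10:37:14, 10:37:17, ...) show minikube re-running sudo pgrep -xnf kube-apiserver.*minikube.* roughly every three seconds until an apiserver process appears. As a rough illustration of that poll loop, here is a minimal sketch, assuming local execution instead of minikube's ssh_runner; the two-minute deadline is an arbitrary choice for the example, not minikube's actual timeout:

    // wait_apiserver.go — hypothetical sketch of the ~3s poll cadence
    // visible in the log; assumes pgrep is run locally, not over SSH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Same check the log shows: is a kube-apiserver process up?
    		// pgrep exits non-zero when nothing matches, so err == nil
    		// means the process was found.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		<-ticker.C
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }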
	I1206 10:37:16.996304  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.008394  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:17.008453  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:17.036500  528268 cri.go:89] found id: ""
	I1206 10:37:17.036513  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.036521  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:17.036526  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:17.036591  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:17.064759  528268 cri.go:89] found id: ""
	I1206 10:37:17.064773  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.064780  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:17.064785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:17.064846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:17.095263  528268 cri.go:89] found id: ""
	I1206 10:37:17.095276  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.095284  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:17.095300  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:17.095364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:17.121651  528268 cri.go:89] found id: ""
	I1206 10:37:17.121665  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.121673  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:17.121678  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:17.121747  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:17.148683  528268 cri.go:89] found id: ""
	I1206 10:37:17.148697  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.148704  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:17.148711  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:17.148773  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:17.180504  528268 cri.go:89] found id: ""
	I1206 10:37:17.180518  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.180535  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:17.180542  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:17.180611  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:17.208816  528268 cri.go:89] found id: ""
	I1206 10:37:17.208830  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.208837  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:17.208844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:17.208854  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:17.277798  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:17.277818  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:17.292728  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:17.292743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:17.366791  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:17.366801  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:17.366812  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:17.434192  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:17.434212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:19.971273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.981226  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:19.981286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:20.019762  528268 cri.go:89] found id: ""
	I1206 10:37:20.019777  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.019785  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:20.019791  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:20.019866  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:20.047256  528268 cri.go:89] found id: ""
	I1206 10:37:20.047270  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.047278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:20.047283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:20.047345  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:20.075694  528268 cri.go:89] found id: ""
	I1206 10:37:20.075708  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.075716  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:20.075721  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:20.075785  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:20.105896  528268 cri.go:89] found id: ""
	I1206 10:37:20.105910  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.105917  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:20.105922  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:20.105981  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:20.131910  528268 cri.go:89] found id: ""
	I1206 10:37:20.131923  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.131930  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:20.131935  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:20.131997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:20.157115  528268 cri.go:89] found id: ""
	I1206 10:37:20.157129  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.157135  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:20.157140  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:20.157202  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:20.188374  528268 cri.go:89] found id: ""
	I1206 10:37:20.188394  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.188401  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:20.188423  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:20.188434  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:20.267587  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:20.267607  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:20.283222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:20.283238  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:20.348772  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:20.348783  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:20.348796  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:20.415451  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:20.415474  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:22.948223  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.959160  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:22.959221  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:22.985131  528268 cri.go:89] found id: ""
	I1206 10:37:22.985144  528268 logs.go:282] 0 containers: []
	W1206 10:37:22.985151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:22.985156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:22.985242  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:23.012336  528268 cri.go:89] found id: ""
	I1206 10:37:23.012350  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.012358  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:23.012363  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:23.012433  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:23.037784  528268 cri.go:89] found id: ""
	I1206 10:37:23.037808  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.037816  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:23.037822  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:23.037899  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:23.066240  528268 cri.go:89] found id: ""
	I1206 10:37:23.066254  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.066262  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:23.066267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:23.066335  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:23.090898  528268 cri.go:89] found id: ""
	I1206 10:37:23.090912  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.090921  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:23.090926  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:23.090993  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:23.116011  528268 cri.go:89] found id: ""
	I1206 10:37:23.116039  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.116047  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:23.116052  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:23.116127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:23.140768  528268 cri.go:89] found id: ""
	I1206 10:37:23.140781  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.140788  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:23.140796  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:23.140806  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:23.210300  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:23.210319  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:23.229296  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:23.229311  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:23.297415  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:23.297428  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:23.297438  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:23.364180  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:23.364200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
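Each cycle also walks the same fixed list of control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) via sudo crictl ps -a --quiet --name=<component>; an empty result is what produces the repeated `No container was found matching ...` warnings from logs.go:284. A hypothetical stand-alone version of that check, assuming crictl is installed and runnable locally, might look like:

    // list_cri.go — hypothetical sketch of the per-component container
    // check seen in the cri.go lines above; not minikube code.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(name string) ([]string, error) {
    	// --quiet prints one container ID per line; empty output means
    	// no container (running or exited) matches the name filter.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		ids, err := containerIDs(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

In this run every component returns an empty list, which is consistent with the apiserver dial failure above: no control-plane containers were ever created.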
	I1206 10:37:25.892120  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.902322  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:25.902381  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:25.931154  528268 cri.go:89] found id: ""
	I1206 10:37:25.931168  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.931175  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:25.931180  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:25.931245  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:25.957709  528268 cri.go:89] found id: ""
	I1206 10:37:25.957724  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.957731  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:25.957736  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:25.957793  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:25.985765  528268 cri.go:89] found id: ""
	I1206 10:37:25.985779  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.985786  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:25.985791  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:25.985849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:26.016739  528268 cri.go:89] found id: ""
	I1206 10:37:26.016859  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.016867  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:26.016873  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:26.016945  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:26.043228  528268 cri.go:89] found id: ""
	I1206 10:37:26.043242  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.043252  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:26.043258  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:26.043331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:26.069862  528268 cri.go:89] found id: ""
	I1206 10:37:26.069888  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.069896  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:26.069902  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:26.069979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:26.097635  528268 cri.go:89] found id: ""
	I1206 10:37:26.097651  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.097659  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:26.097666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:26.097677  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:26.163107  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:26.163132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:26.177703  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:26.177723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:26.254904  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:26.254915  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:26.254927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:26.322703  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:26.322723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:28.850178  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.860819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:28.860878  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:28.887162  528268 cri.go:89] found id: ""
	I1206 10:37:28.887175  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.887183  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:28.887188  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:28.887246  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:28.912223  528268 cri.go:89] found id: ""
	I1206 10:37:28.912237  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.912251  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:28.912256  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:28.912318  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:28.937893  528268 cri.go:89] found id: ""
	I1206 10:37:28.937907  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.937914  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:28.937920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:28.937979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:28.966798  528268 cri.go:89] found id: ""
	I1206 10:37:28.966812  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.966819  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:28.966825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:28.966887  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:28.994392  528268 cri.go:89] found id: ""
	I1206 10:37:28.994406  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.994413  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:28.994418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:28.994480  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:29.020703  528268 cri.go:89] found id: ""
	I1206 10:37:29.020718  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.020725  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:29.020730  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:29.020788  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:29.049956  528268 cri.go:89] found id: ""
	I1206 10:37:29.049969  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.049977  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:29.049986  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:29.049998  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:29.116113  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:29.116133  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:29.130937  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:29.130954  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:29.199649  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:29.199659  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:29.199670  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:29.271990  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:29.272011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:31.801925  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.812057  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:31.812130  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:31.837642  528268 cri.go:89] found id: ""
	I1206 10:37:31.837656  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.837663  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:31.837668  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:31.837724  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:31.863706  528268 cri.go:89] found id: ""
	I1206 10:37:31.863721  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.863728  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:31.863733  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:31.863795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:31.892284  528268 cri.go:89] found id: ""
	I1206 10:37:31.892298  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.892305  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:31.892310  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:31.892370  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:31.920973  528268 cri.go:89] found id: ""
	I1206 10:37:31.920987  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.920994  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:31.920999  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:31.921072  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:31.946196  528268 cri.go:89] found id: ""
	I1206 10:37:31.946209  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.946216  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:31.946221  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:31.946280  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:31.972154  528268 cri.go:89] found id: ""
	I1206 10:37:31.972168  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.972176  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:31.972182  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:31.972273  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:31.998166  528268 cri.go:89] found id: ""
	I1206 10:37:31.998179  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.998194  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:31.998202  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:31.998212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:32.066002  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:32.066020  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:32.081440  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:32.081456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:32.155010  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:32.155021  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:32.155032  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:32.239005  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:32.239035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:34.779578  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.789994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:34.790061  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:34.817069  528268 cri.go:89] found id: ""
	I1206 10:37:34.817083  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.817091  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:34.817096  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:34.817154  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:34.843456  528268 cri.go:89] found id: ""
	I1206 10:37:34.843470  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.843478  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:34.843483  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:34.843540  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:34.873150  528268 cri.go:89] found id: ""
	I1206 10:37:34.873164  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.873171  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:34.873176  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:34.873236  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:34.901463  528268 cri.go:89] found id: ""
	I1206 10:37:34.901476  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.901483  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:34.901489  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:34.901546  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:34.930362  528268 cri.go:89] found id: ""
	I1206 10:37:34.930376  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.930383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:34.930389  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:34.930460  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:34.955907  528268 cri.go:89] found id: ""
	I1206 10:37:34.955920  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.955928  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:34.955936  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:34.955997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:34.981646  528268 cri.go:89] found id: ""
	I1206 10:37:34.981660  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.981667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:34.981676  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:34.981690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:35.051925  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:35.051946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:35.067379  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:35.067395  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:35.132911  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:35.132921  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:35.132932  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:35.203071  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:35.203091  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:37.738787  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.749325  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:37.749395  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:37.777933  528268 cri.go:89] found id: ""
	I1206 10:37:37.777947  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.777955  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:37.777961  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:37.778018  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:37.803626  528268 cri.go:89] found id: ""
	I1206 10:37:37.803640  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.803647  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:37.803652  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:37.803711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:37.829518  528268 cri.go:89] found id: ""
	I1206 10:37:37.829532  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.829540  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:37.829545  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:37.829608  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:37.854832  528268 cri.go:89] found id: ""
	I1206 10:37:37.854846  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.854853  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:37.854858  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:37.854918  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:37.879627  528268 cri.go:89] found id: ""
	I1206 10:37:37.879641  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.879649  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:37.879654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:37.879712  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:37.906054  528268 cri.go:89] found id: ""
	I1206 10:37:37.906067  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.906074  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:37.906080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:37.906137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:37.931611  528268 cri.go:89] found id: ""
	I1206 10:37:37.931624  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.931632  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:37.931640  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:37.931651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:37.997740  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:37.997760  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:38.023284  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:38.023303  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:38.091986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:38.082741   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.083460   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.085430   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.086101   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.087877   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:38.092014  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:38.092027  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:38.163320  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:38.163343  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
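
The cycle above (repeated below with later timestamps) is minikube's control-plane probe: for each expected component it asks the CRI runtime for a matching container via crictl and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal standalone sketch of that probe, assuming only that crictl is on the PATH and sudo is available (an illustration, not minikube source), looks like:

    // probecri.go — illustrative only, not minikube source.
    // Mirrors the probe in the log: "sudo crictl ps -a --quiet --name=<component>".
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, name := range components {
    		// Equivalent to: sudo crictl ps -a --quiet --name=<name>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		// --quiet prints one container ID per line; none means not found.
    		if ids := strings.Fields(string(out)); len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name) // the W-level lines above
    		} else {
    			fmt.Printf("%s: %d container(s)\n", name, len(ids))
    		}
    	}
    }
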
	I1206 10:37:40.709445  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.720016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:40.720077  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:40.745539  528268 cri.go:89] found id: ""
	I1206 10:37:40.745554  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.745561  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:40.745566  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:40.745630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:40.775524  528268 cri.go:89] found id: ""
	I1206 10:37:40.775538  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.775546  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:40.775552  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:40.775612  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:40.800974  528268 cri.go:89] found id: ""
	I1206 10:37:40.800988  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.800995  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:40.801001  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:40.801064  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:40.825855  528268 cri.go:89] found id: ""
	I1206 10:37:40.825869  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.825877  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:40.825882  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:40.825940  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:40.856039  528268 cri.go:89] found id: ""
	I1206 10:37:40.856052  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.856059  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:40.856064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:40.856129  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:40.886499  528268 cri.go:89] found id: ""
	I1206 10:37:40.886513  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.886520  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:40.886527  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:40.886586  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:40.913975  528268 cri.go:89] found id: ""
	I1206 10:37:40.913989  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.913996  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:40.914004  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:40.914014  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:40.979882  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:40.979904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:40.995137  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:40.995155  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:41.060228  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:41.051325   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.052002   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.053633   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.054141   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.055869   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:41.060245  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:41.060258  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:41.130025  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:41.130046  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.659238  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.669354  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:43.669430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:43.694872  528268 cri.go:89] found id: ""
	I1206 10:37:43.694886  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.694893  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:43.694899  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:43.694956  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:43.720265  528268 cri.go:89] found id: ""
	I1206 10:37:43.720278  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.720286  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:43.720290  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:43.720349  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:43.746213  528268 cri.go:89] found id: ""
	I1206 10:37:43.746226  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.746234  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:43.746239  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:43.746300  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:43.771902  528268 cri.go:89] found id: ""
	I1206 10:37:43.771916  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.771923  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:43.771928  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:43.771984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:43.797840  528268 cri.go:89] found id: ""
	I1206 10:37:43.797854  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.797874  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:43.797879  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:43.797949  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:43.823569  528268 cri.go:89] found id: ""
	I1206 10:37:43.823583  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.823590  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:43.823596  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:43.823654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:43.850154  528268 cri.go:89] found id: ""
	I1206 10:37:43.850169  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.850187  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:43.850196  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:43.850207  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:43.919668  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:43.919690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.954253  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:43.954269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:44.019533  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:44.019556  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:44.034911  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:44.034930  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:44.098130  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
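
Every describe-nodes attempt fails the same way: kubectl cannot reach the apiserver on localhost:8441 and gets "connection refused", which simply means nothing is listening on that port yet. A quick way to reproduce that check by hand, as a sketch (the port is the one from this log; adjust for other clusters):

    // dialcheck.go — illustrative sketch, not part of minikube.
    // "connection refused" from kubectl means a TCP dial to the apiserver
    // port fails; this reproduces exactly that check.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8441")
    }
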
	I1206 10:37:46.599796  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.610343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:46.610410  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:46.637289  528268 cri.go:89] found id: ""
	I1206 10:37:46.637304  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.637311  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:46.637317  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:46.637380  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:46.664098  528268 cri.go:89] found id: ""
	I1206 10:37:46.664112  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.664118  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:46.664123  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:46.664183  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:46.693606  528268 cri.go:89] found id: ""
	I1206 10:37:46.693619  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.693638  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:46.693644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:46.693718  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:46.719425  528268 cri.go:89] found id: ""
	I1206 10:37:46.719438  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.719445  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:46.719451  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:46.719511  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:46.748960  528268 cri.go:89] found id: ""
	I1206 10:37:46.748974  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.748982  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:46.748987  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:46.749047  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:46.782749  528268 cri.go:89] found id: ""
	I1206 10:37:46.782763  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.782770  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:46.782776  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:46.782846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:46.807615  528268 cri.go:89] found id: ""
	I1206 10:37:46.807629  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.807636  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:46.807644  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:46.807654  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:46.838618  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:46.838634  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:46.905518  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:46.905537  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:46.920399  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:46.920417  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:46.985957  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:46.978179   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.978741   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980269   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980715   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.982218   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:46.985968  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:46.985981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.555258  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.565209  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:49.565266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:49.593833  528268 cri.go:89] found id: ""
	I1206 10:37:49.593846  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.593853  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:49.593858  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:49.593914  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:49.621098  528268 cri.go:89] found id: ""
	I1206 10:37:49.621111  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.621119  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:49.621124  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:49.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:49.645669  528268 cri.go:89] found id: ""
	I1206 10:37:49.645681  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.645689  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:49.645694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:49.645750  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:49.672058  528268 cri.go:89] found id: ""
	I1206 10:37:49.672072  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.672080  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:49.672085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:49.672140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:49.696988  528268 cri.go:89] found id: ""
	I1206 10:37:49.697002  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.697009  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:49.697015  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:49.697076  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:49.723261  528268 cri.go:89] found id: ""
	I1206 10:37:49.723275  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.723282  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:49.723287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:49.723357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:49.750307  528268 cri.go:89] found id: ""
	I1206 10:37:49.750321  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.750328  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:49.750336  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:49.750346  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:49.765699  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:49.765721  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:49.827929  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:49.819281   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.820177   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.821896   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.822193   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.823677   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:49.827938  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:49.827962  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.899802  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:49.899820  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:49.928018  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:49.928035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.495744  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:52.505888  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:52.505958  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:52.532610  528268 cri.go:89] found id: ""
	I1206 10:37:52.532623  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.532631  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:52.532636  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:52.532695  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:52.558679  528268 cri.go:89] found id: ""
	I1206 10:37:52.558692  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.558700  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:52.558705  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:52.558762  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:52.585203  528268 cri.go:89] found id: ""
	I1206 10:37:52.585217  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.585225  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:52.585230  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:52.585286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:52.611483  528268 cri.go:89] found id: ""
	I1206 10:37:52.611496  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.611503  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:52.611510  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:52.611568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:52.638054  528268 cri.go:89] found id: ""
	I1206 10:37:52.638067  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.638075  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:52.638080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:52.638137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:52.666746  528268 cri.go:89] found id: ""
	I1206 10:37:52.666760  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.666767  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:52.666773  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:52.666833  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:52.691974  528268 cri.go:89] found id: ""
	I1206 10:37:52.691997  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.692005  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:52.692015  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:52.692025  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:52.761093  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:52.761113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:52.790376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:52.790392  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.858897  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:52.858915  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:52.873906  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:52.873923  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:52.937907  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:52.929773   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.930648   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932194   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932561   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.934055   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.439279  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.450466  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:55.450529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:55.483494  528268 cri.go:89] found id: ""
	I1206 10:37:55.483508  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.483515  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:55.483520  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:55.483576  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:55.515860  528268 cri.go:89] found id: ""
	I1206 10:37:55.515874  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.515881  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:55.515886  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:55.515942  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:55.542224  528268 cri.go:89] found id: ""
	I1206 10:37:55.542239  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.542248  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:55.542253  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:55.542311  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:55.567547  528268 cri.go:89] found id: ""
	I1206 10:37:55.567561  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.567568  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:55.567574  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:55.567630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:55.594478  528268 cri.go:89] found id: ""
	I1206 10:37:55.594491  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.594499  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:55.594505  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:55.594568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:55.620118  528268 cri.go:89] found id: ""
	I1206 10:37:55.620132  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.620146  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:55.620151  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:55.620210  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:55.644692  528268 cri.go:89] found id: ""
	I1206 10:37:55.644706  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.644713  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:55.644721  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:55.644732  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:55.712056  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:55.702146   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.702755   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704324   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704667   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.708009   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.712075  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:55.712085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:55.782393  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:55.782414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:55.817896  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:55.817913  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:55.892357  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:55.892385  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
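
The timestamps show the outer loop retrying roughly every three seconds: each iteration begins with "sudo pgrep -xnf kube-apiserver.*minikube.*" to look for a running apiserver process before re-probing containers and re-collecting logs. A rough standalone equivalent of that wait loop, assuming pgrep is available and using an arbitrary two-minute deadline (both assumptions, not values taken from minikube):

    // waitapiserver.go — rough sketch of the polling loop visible above;
    // the 3s cadence matches the log timestamps, the deadline is arbitrary.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when a process matches the pattern, non-zero otherwise.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
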
	I1206 10:37:58.407847  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.417968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:58.418026  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:58.446859  528268 cri.go:89] found id: ""
	I1206 10:37:58.446872  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.446879  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:58.446884  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:58.446946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:58.475161  528268 cri.go:89] found id: ""
	I1206 10:37:58.475175  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.475182  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:58.475187  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:58.475244  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:58.503498  528268 cri.go:89] found id: ""
	I1206 10:37:58.503513  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.503520  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:58.503525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:58.503583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:58.529955  528268 cri.go:89] found id: ""
	I1206 10:37:58.529970  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.529977  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:58.529983  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:58.530038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:58.557174  528268 cri.go:89] found id: ""
	I1206 10:37:58.557188  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.557196  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:58.557201  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:58.557259  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:58.586116  528268 cri.go:89] found id: ""
	I1206 10:37:58.586130  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.586149  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:58.586156  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:58.586211  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:58.620339  528268 cri.go:89] found id: ""
	I1206 10:37:58.620353  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.620361  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:58.620368  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:58.620379  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:58.686086  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:58.686105  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.700471  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:58.700487  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:58.772759  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:58.764751   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.765482   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767041   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767492   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.769066   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:58.772768  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:58.772779  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:58.841699  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:58.841718  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.372136  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.382712  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:01.382776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:01.410577  528268 cri.go:89] found id: ""
	I1206 10:38:01.410591  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.410598  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:01.410603  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:01.410666  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:01.444228  528268 cri.go:89] found id: ""
	I1206 10:38:01.444251  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.444258  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:01.444264  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:01.444331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:01.486632  528268 cri.go:89] found id: ""
	I1206 10:38:01.486645  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.486652  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:01.486657  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:01.486717  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:01.518190  528268 cri.go:89] found id: ""
	I1206 10:38:01.518203  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.518210  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:01.518215  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:01.518276  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:01.543942  528268 cri.go:89] found id: ""
	I1206 10:38:01.543956  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.543963  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:01.543968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:01.544032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:01.569769  528268 cri.go:89] found id: ""
	I1206 10:38:01.569803  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.569832  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:01.569845  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:01.569902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:01.594441  528268 cri.go:89] found id: ""
	I1206 10:38:01.594456  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.594463  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:01.594471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:01.594482  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:01.609124  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:01.609139  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:01.671291  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:01.663080   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.663834   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665465   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665773   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.667299   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:01.671302  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:01.671312  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:01.739749  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:01.739769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.768671  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:01.768687  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.339038  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.349363  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:04.349432  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:04.375032  528268 cri.go:89] found id: ""
	I1206 10:38:04.375045  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.375052  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:04.375058  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:04.375139  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:04.399997  528268 cri.go:89] found id: ""
	I1206 10:38:04.400011  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.400018  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:04.400023  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:04.400081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:04.424851  528268 cri.go:89] found id: ""
	I1206 10:38:04.424876  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.424884  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:04.424889  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:04.424959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:04.453149  528268 cri.go:89] found id: ""
	I1206 10:38:04.453162  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.453170  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:04.453175  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:04.453263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:04.483514  528268 cri.go:89] found id: ""
	I1206 10:38:04.483527  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.483534  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:04.483540  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:04.483598  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:04.511967  528268 cri.go:89] found id: ""
	I1206 10:38:04.511980  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.511987  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:04.511993  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:04.512048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:04.541164  528268 cri.go:89] found id: ""
	I1206 10:38:04.541175  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.541182  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:04.541190  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:04.541199  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:04.575975  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:04.575991  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.642763  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:04.642781  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:04.657313  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:04.657336  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:04.721928  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:04.721939  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:04.721952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
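
The cycle above is minikube's diagnostic sweep while it waits for the API server: probe for each expected control-plane container, then collect kubelet, dmesg, "describe nodes", CRI-O and container-status output. Below is a minimal sketch for replaying the same sweep by hand inside the node (e.g. after "minikube ssh"); the individual commands are copied verbatim from the log, while the loop framing and the CONTROL_PLANE variable are illustrative additions, not minikube's code.

	#!/usr/bin/env bash
	# Sketch: replay the probe loop visible in the log above by hand.
	# Assumes a shell inside the minikube node (e.g. via "minikube ssh").
	CONTROL_PLANE="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet"
	for name in $CONTROL_PLANE; do
	  # minikube issues exactly this crictl query per component:
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done
	# Diagnostics gathered on each failed pass (same commands as in the log):
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: nothing listens on localhost:8441
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
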
	[... the same log-gathering cycle repeats at 10:38:07, 10:38:10, 10:38:13, 10:38:16, 10:38:19, 10:38:22 and 10:38:25: each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet containers, gathers kubelet, dmesg, CRI-O and container-status logs, and "describe nodes" fails again with "connect: connection refused" on localhost:8441 ...]
	I1206 10:38:28.053670  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.064577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.064641  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.091082  528268 cri.go:89] found id: ""
	I1206 10:38:28.091097  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.091106  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.091111  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.091205  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.116793  528268 cri.go:89] found id: ""
	I1206 10:38:28.116808  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.116815  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.116822  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.116881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.145938  528268 cri.go:89] found id: ""
	I1206 10:38:28.145952  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.145960  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.145965  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.146025  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.171742  528268 cri.go:89] found id: ""
	I1206 10:38:28.171755  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.171763  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.171768  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.171826  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.197528  528268 cri.go:89] found id: ""
	I1206 10:38:28.197542  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.197549  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.197554  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.197613  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.224277  528268 cri.go:89] found id: ""
	I1206 10:38:28.224291  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.224298  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.224303  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.224368  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.252201  528268 cri.go:89] found id: ""
	I1206 10:38:28.252215  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.252223  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.252237  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:28.252248  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.284626  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.284642  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.351035  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.351055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.366043  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.366061  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:28.437473  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:28.437483  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:28.437506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
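
Editor's note: each retry cycle gathers the last 400 journal entries per systemd unit. The equivalent ad-hoc commands, if you want to pull the same logs manually (unit names taken verbatim from the log above):

#!/bin/bash
# Tail the container runtime and kubelet unit journals, mirroring minikube's
# "Gathering logs for CRI-O / kubelet" steps (-n 400 = last 400 entries each).
sudo journalctl -u crio -n 400
sudo journalctl -u kubelet -n 400
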
	I1206 10:38:31.019982  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.030426  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.030488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.055406  528268 cri.go:89] found id: ""
	I1206 10:38:31.055419  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.055427  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.055432  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.055490  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.081639  528268 cri.go:89] found id: ""
	I1206 10:38:31.081653  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.081660  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.081665  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.081729  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.111871  528268 cri.go:89] found id: ""
	I1206 10:38:31.111886  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.111894  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.111899  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.111959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.142949  528268 cri.go:89] found id: ""
	I1206 10:38:31.142964  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.142971  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.142977  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.143042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.169930  528268 cri.go:89] found id: ""
	I1206 10:38:31.169946  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.169954  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.169959  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.170020  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.196019  528268 cri.go:89] found id: ""
	I1206 10:38:31.196033  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.196041  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.196046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.196104  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.226526  528268 cri.go:89] found id: ""
	I1206 10:38:31.226540  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.226547  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.226556  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.226567  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.289723  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.289733  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:31.289746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.358922  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.358941  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.387252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.387268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:31.460730  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:31.460749  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
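
Editor's note: the dmesg step restricts output to warning-and-worse kernel messages. Reading the util-linux flags used above: -P disables the pager, -H forces human-readable timestamps, -L=never disables color, and --level selects the priorities to keep. Reproduced as a standalone command:

#!/bin/bash
# Kernel messages at warning level or worse, human-readable, no pager or color,
# capped at the last 400 lines -- identical to the probe in the log above.
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
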
	I1206 10:38:33.977403  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:33.987866  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:33.987933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.023637  528268 cri.go:89] found id: ""
	I1206 10:38:34.023651  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.023659  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.023664  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.023728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.052242  528268 cri.go:89] found id: ""
	I1206 10:38:34.052256  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.052263  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.052269  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.052330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.077707  528268 cri.go:89] found id: ""
	I1206 10:38:34.077721  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.077728  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.077734  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.077795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.103066  528268 cri.go:89] found id: ""
	I1206 10:38:34.103079  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.103098  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.103103  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.103185  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.132994  528268 cri.go:89] found id: ""
	I1206 10:38:34.133007  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.133015  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.133020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.133081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.159017  528268 cri.go:89] found id: ""
	I1206 10:38:34.159030  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.159038  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.159043  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.159101  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.185998  528268 cri.go:89] found id: ""
	I1206 10:38:34.186012  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.186020  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.186028  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.186042  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.257644  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.257664  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.273073  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.273092  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.344235  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.344247  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:34.344260  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:34.414848  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.414867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
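
Editor's note: every `describe nodes` attempt above fails the same way because kubectl cannot reach the apiserver on localhost:8441. Before rerunning kubectl it is cheaper to probe the port directly; a hypothetical wait loop (the port comes from the errors above, but the loop itself is illustrative, not minikube code):

#!/bin/bash
# Poll the apiserver port until it accepts a TCP connection or we give up.
# 8441 is the apiserver port this profile uses (see the refused dials above);
# /dev/tcp redirection is a bash builtin, so no extra tools are needed.
for i in $(seq 1 30); do
  if (echo > /dev/tcp/127.0.0.1/8441) 2>/dev/null; then
    echo "apiserver is accepting connections"
    exit 0
  fi
  sleep 2
done
echo "apiserver still refusing connections on :8441" >&2
exit 1
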
	I1206 10:38:36.966180  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:36.976392  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:36.976457  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.002549  528268 cri.go:89] found id: ""
	I1206 10:38:37.002566  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.002574  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.002580  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.002657  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.033009  528268 cri.go:89] found id: ""
	I1206 10:38:37.033024  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.033031  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.033037  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.033106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.059257  528268 cri.go:89] found id: ""
	I1206 10:38:37.059271  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.059279  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.059285  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.059346  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.090436  528268 cri.go:89] found id: ""
	I1206 10:38:37.090449  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.090457  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.090462  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.090523  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.118194  528268 cri.go:89] found id: ""
	I1206 10:38:37.118208  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.118215  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.118222  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.118284  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.144022  528268 cri.go:89] found id: ""
	I1206 10:38:37.144036  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.144044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.144049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.144107  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.168416  528268 cri.go:89] found id: ""
	I1206 10:38:37.168430  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.168438  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.168445  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.168456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.234878  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.234898  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.250351  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.250374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.316139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.316149  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:37.316159  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:37.385780  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.385800  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
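
Editor's note: the per-component sweep (kube-apiserver, etcd, coredns, ...) is just `crictl ps` with a name filter, run once per expected control-plane container. A compact shell sketch of that loop; the component list matches the one minikube walks in each cycle above:

#!/bin/bash
# Report which expected control-plane containers exist in any state.
# An empty result is what the log records as: found id: ""
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  if [ -z "$ids" ]; then
    echo "no container matching \"$name\""
  else
    echo "$name: $ids"
  fi
done
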
	I1206 10:38:39.916327  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:39.926345  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:39.926412  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:39.953639  528268 cri.go:89] found id: ""
	I1206 10:38:39.953652  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.953660  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:39.953671  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:39.953732  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:39.979049  528268 cri.go:89] found id: ""
	I1206 10:38:39.979064  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.979072  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:39.979077  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:39.979164  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.013684  528268 cri.go:89] found id: ""
	I1206 10:38:40.013700  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.013708  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.013714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.013783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.052804  528268 cri.go:89] found id: ""
	I1206 10:38:40.052820  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.052828  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.052834  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.052902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.084356  528268 cri.go:89] found id: ""
	I1206 10:38:40.084372  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.084380  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.084386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.084451  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.112282  528268 cri.go:89] found id: ""
	I1206 10:38:40.112297  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.112304  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.112312  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.112373  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.140065  528268 cri.go:89] found id: ""
	I1206 10:38:40.140080  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.140087  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.140094  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.140108  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.208521  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.208530  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:40.208541  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:40.280105  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.280126  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.313393  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.313409  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.380769  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.380789  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:42.896735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:42.906913  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:42.906971  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:42.932466  528268 cri.go:89] found id: ""
	I1206 10:38:42.932480  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.932493  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:42.932499  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:42.932560  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:42.962618  528268 cri.go:89] found id: ""
	I1206 10:38:42.962633  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.962641  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:42.962647  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:42.962704  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:42.989497  528268 cri.go:89] found id: ""
	I1206 10:38:42.989511  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.989519  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:42.989525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:42.989581  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.016798  528268 cri.go:89] found id: ""
	I1206 10:38:43.016818  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.016825  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.016831  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.017042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.044571  528268 cri.go:89] found id: ""
	I1206 10:38:43.044589  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.044599  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.044606  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.044679  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.072240  528268 cri.go:89] found id: ""
	I1206 10:38:43.072256  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.072264  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.072269  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.072330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.098196  528268 cri.go:89] found id: ""
	I1206 10:38:43.098211  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.098218  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.098225  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.098237  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.113559  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.113577  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.177585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.177595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:43.177606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:43.251189  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.251210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.278658  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.278673  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:45.849509  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:45.861204  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:45.861266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:45.888209  528268 cri.go:89] found id: ""
	I1206 10:38:45.888228  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.888236  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:45.888241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:45.888306  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:45.913344  528268 cri.go:89] found id: ""
	I1206 10:38:45.913357  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.913365  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:45.913370  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:45.913429  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:45.939830  528268 cri.go:89] found id: ""
	I1206 10:38:45.939844  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.939852  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:45.939857  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:45.939927  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:45.964893  528268 cri.go:89] found id: ""
	I1206 10:38:45.964907  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.964914  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:45.964920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:45.964984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:45.991528  528268 cri.go:89] found id: ""
	I1206 10:38:45.991540  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.991548  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:45.991553  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:45.991614  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.018162  528268 cri.go:89] found id: ""
	I1206 10:38:46.018176  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.018184  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.018190  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.018249  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.045784  528268 cri.go:89] found id: ""
	I1206 10:38:46.045807  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.045814  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.045822  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.045833  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.114786  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.114796  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:46.114808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:46.185171  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.185193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.213442  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.213458  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.280354  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.280374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:48.796511  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:48.807012  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:48.807073  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:48.832313  528268 cri.go:89] found id: ""
	I1206 10:38:48.832337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.832344  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:48.832349  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:48.832420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:48.857914  528268 cri.go:89] found id: ""
	I1206 10:38:48.857928  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.857935  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:48.857940  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:48.858000  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:48.887721  528268 cri.go:89] found id: ""
	I1206 10:38:48.887735  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.887743  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:48.887748  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:48.887808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:48.912329  528268 cri.go:89] found id: ""
	I1206 10:38:48.912343  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.912351  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:48.912356  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:48.912416  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:48.942323  528268 cri.go:89] found id: ""
	I1206 10:38:48.942337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.942344  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:48.942349  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:48.942408  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:48.971776  528268 cri.go:89] found id: ""
	I1206 10:38:48.971790  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.971798  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:48.971803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:48.971861  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:48.997054  528268 cri.go:89] found id: ""
	I1206 10:38:48.997068  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.997076  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:48.997084  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:48.997095  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:49.071387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.071413  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.099724  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.099743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.165471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.165492  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.180707  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.180755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.246459  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
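
Editor's note: each cycle opens with a pgrep probe for a live apiserver process before falling back to the crictl sweep. With procps pgrep, -f matches against the full command line, -x requires the whole command line to match the pattern, and -n reports only the newest matching PID. As a standalone check:

#!/bin/bash
# Exit status tells you whether a kube-apiserver process for a minikube node
# is currently running (same pattern as the probe in the log above).
if sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
  echo "apiserver process found"
else
  echo "no apiserver process running" >&2
fi
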
	I1206 10:38:51.747477  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:51.757424  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:51.757483  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:51.785368  528268 cri.go:89] found id: ""
	I1206 10:38:51.785382  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.785390  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:51.785395  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:51.785452  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:51.814468  528268 cri.go:89] found id: ""
	I1206 10:38:51.814482  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.814489  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:51.814494  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:51.814553  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:51.839897  528268 cri.go:89] found id: ""
	I1206 10:38:51.839911  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.839918  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:51.839923  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:51.839980  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:51.865924  528268 cri.go:89] found id: ""
	I1206 10:38:51.865938  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.865951  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:51.865956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:51.866011  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:51.891688  528268 cri.go:89] found id: ""
	I1206 10:38:51.891702  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.891709  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:51.891714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:51.891772  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:51.917048  528268 cri.go:89] found id: ""
	I1206 10:38:51.917062  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.917070  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:51.917075  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:51.917132  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:51.942873  528268 cri.go:89] found id: ""
	I1206 10:38:51.942888  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.942895  528268 logs.go:284] No container was found matching "kindnet"
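The block above is minikube's per-component probe: for each expected control-plane container it asks the CRI for matching IDs and gets an empty list back, meaning not even an exited kube-apiserver container exists. The same check can be reproduced on the node with the exact flags from the log; a minimal sketch:

    # Reproduce the per-component probe from the log (run inside the node)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      printf '%s: %s\n' "$name" "${ids:-<none>}"
    done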
	I1206 10:38:51.942903  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:51.942914  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.011199  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
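Note that the "describe nodes" step does not use the host's kubectl: it runs the kubectl binary staged on the node against the node's own kubeconfig, and it fails for the same reason as every other probe. The invocation, taken verbatim from the log, can be replayed by hand inside the node:

    # Same command the collector runs; fails while nothing listens on 8441
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig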
	I1206 10:38:52.011209  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:52.011220  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:52.085464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.085485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.119213  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.119230  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:52.189731  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.189751  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
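With no containers to inspect, the collector falls back to host-level sources: the CRI-O and kubelet journald units, overall container status, and the kernel ring buffer. These are ordinary commands and can be run by hand when triaging a node in this state; a simplified sketch of the same set (the log's dmesg call adds -PH -L=never, i.e. human-readable output, no pager, no color):

    # Host-level fallbacks used by the collector (run inside the node)
    sudo journalctl -u kubelet -n 400                           # kubelet unit log
    sudo journalctl -u crio -n 400                              # CRI-O unit log
    sudo crictl ps -a || sudo docker ps -a                      # container status
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors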
	I1206 10:38:54.705436  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:54.717135  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:54.717196  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:54.755081  528268 cri.go:89] found id: ""
	I1206 10:38:54.755095  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.755105  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:54.755110  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:54.755209  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:54.780971  528268 cri.go:89] found id: ""
	I1206 10:38:54.780985  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.780993  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:54.780998  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:54.781060  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:54.806877  528268 cri.go:89] found id: ""
	I1206 10:38:54.806891  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.806898  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:54.806904  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:54.806967  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:54.832627  528268 cri.go:89] found id: ""
	I1206 10:38:54.832641  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.832649  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:54.832654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:54.832711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:54.857814  528268 cri.go:89] found id: ""
	I1206 10:38:54.857828  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.857836  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:54.857841  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:54.857897  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:54.883738  528268 cri.go:89] found id: ""
	I1206 10:38:54.883752  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.883759  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:54.883764  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:54.883821  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:54.909479  528268 cri.go:89] found id: ""
	I1206 10:38:54.909493  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.909500  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:54.909508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:54.909519  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:54.975629  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:54.975651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.991150  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:54.991166  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.064619  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:55.064628  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:55.064639  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:55.134387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.134406  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.664428  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:57.675264  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:57.675328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:57.709021  528268 cri.go:89] found id: ""
	I1206 10:38:57.709035  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.709043  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:57.709048  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:57.709116  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:57.744132  528268 cri.go:89] found id: ""
	I1206 10:38:57.744146  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.744153  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:57.744159  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:57.744226  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:57.778746  528268 cri.go:89] found id: ""
	I1206 10:38:57.778760  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.778767  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:57.778772  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:57.778829  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:57.805263  528268 cri.go:89] found id: ""
	I1206 10:38:57.805276  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.805284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:57.805289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:57.805348  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:57.831152  528268 cri.go:89] found id: ""
	I1206 10:38:57.831166  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.831173  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:57.831178  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:57.831240  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:57.857097  528268 cri.go:89] found id: ""
	I1206 10:38:57.857111  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.857119  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:57.857124  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:57.857189  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:57.882945  528268 cri.go:89] found id: ""
	I1206 10:38:57.882984  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.882992  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:57.883000  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:57.883011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.915176  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:57.915193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:57.981939  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:57.981958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:57.997358  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:57.997373  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.070527  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:58.070538  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:58.070549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.641789  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.651800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.651859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.679593  528268 cri.go:89] found id: ""
	I1206 10:39:00.679606  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.679613  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.679618  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.679673  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:00.712252  528268 cri.go:89] found id: ""
	I1206 10:39:00.712266  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.712273  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:00.712278  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:00.712337  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:00.746867  528268 cri.go:89] found id: ""
	I1206 10:39:00.746881  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.746888  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:00.746894  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:00.746954  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:00.779153  528268 cri.go:89] found id: ""
	I1206 10:39:00.779167  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.779174  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:00.779180  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:00.779241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:00.805143  528268 cri.go:89] found id: ""
	I1206 10:39:00.805157  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.805164  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:00.805170  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:00.805227  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:00.831339  528268 cri.go:89] found id: ""
	I1206 10:39:00.831353  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.831361  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:00.831368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:00.831430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:00.857571  528268 cri.go:89] found id: ""
	I1206 10:39:00.857585  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.857593  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:00.857600  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:00.857611  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:00.925179  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:00.925189  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:00.925200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.994191  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:00.994210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.029067  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.029085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.100689  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.100709  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.616374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.626603  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.626714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.651732  528268 cri.go:89] found id: ""
	I1206 10:39:03.651746  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.651753  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.651758  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.651818  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.679359  528268 cri.go:89] found id: ""
	I1206 10:39:03.679373  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.679380  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.679385  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.679442  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:03.714610  528268 cri.go:89] found id: ""
	I1206 10:39:03.714624  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.714631  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:03.714636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:03.714693  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:03.745765  528268 cri.go:89] found id: ""
	I1206 10:39:03.745780  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.745787  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:03.745792  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:03.745849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:03.771225  528268 cri.go:89] found id: ""
	I1206 10:39:03.771239  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.771247  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:03.771252  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:03.771316  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:03.796796  528268 cri.go:89] found id: ""
	I1206 10:39:03.796853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.796861  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:03.796867  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:03.796925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:03.822839  528268 cri.go:89] found id: ""
	I1206 10:39:03.822853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.822861  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:03.822878  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:03.822888  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:03.858844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:03.858860  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:03.925683  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:03.925703  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.941280  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:03.941297  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.009034  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:04.009044  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:04.009055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:06.582354  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.592267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.592340  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.617889  528268 cri.go:89] found id: ""
	I1206 10:39:06.617902  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.617909  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.617915  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.617979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.643951  528268 cri.go:89] found id: ""
	I1206 10:39:06.643966  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.643973  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.643978  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.644035  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.669753  528268 cri.go:89] found id: ""
	I1206 10:39:06.669767  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.669774  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.669779  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.669839  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.701353  528268 cri.go:89] found id: ""
	I1206 10:39:06.701373  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.701380  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.701386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.701445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.751930  528268 cri.go:89] found id: ""
	I1206 10:39:06.751944  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.751952  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.751956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.752019  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:06.778713  528268 cri.go:89] found id: ""
	I1206 10:39:06.778727  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.778734  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:06.778741  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:06.778802  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:06.804251  528268 cri.go:89] found id: ""
	I1206 10:39:06.804265  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.804273  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:06.804280  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:06.804290  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.871350  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:06.871368  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:06.885942  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:06.885960  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:06.959058  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:06.959068  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:06.959081  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:07.030114  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.030135  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:09.559397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.569971  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.570039  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.595039  528268 cri.go:89] found id: ""
	I1206 10:39:09.595052  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.595059  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.595065  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.595152  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.621113  528268 cri.go:89] found id: ""
	I1206 10:39:09.621127  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.621135  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.621140  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.651003  528268 cri.go:89] found id: ""
	I1206 10:39:09.651016  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.651024  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.651029  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.651087  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.677104  528268 cri.go:89] found id: ""
	I1206 10:39:09.677118  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.677125  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.677131  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.677187  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.713565  528268 cri.go:89] found id: ""
	I1206 10:39:09.713579  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.713587  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.713592  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.713653  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.741915  528268 cri.go:89] found id: ""
	I1206 10:39:09.741928  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.741935  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.741941  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.741997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.774013  528268 cri.go:89] found id: ""
	I1206 10:39:09.774027  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.774035  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.774042  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.774054  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:09.840091  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:09.840113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.855657  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:09.855675  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:09.919867  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:09.919877  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:09.919901  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:09.991592  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:09.991613  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.526559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.537148  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.537208  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.570214  528268 cri.go:89] found id: ""
	I1206 10:39:12.570228  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.570235  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.570241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.570299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.595309  528268 cri.go:89] found id: ""
	I1206 10:39:12.595324  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.595331  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.595342  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.595401  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.620408  528268 cri.go:89] found id: ""
	I1206 10:39:12.620422  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.620429  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.620434  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.620495  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.645606  528268 cri.go:89] found id: ""
	I1206 10:39:12.645621  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.645628  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.645644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.645700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.672105  528268 cri.go:89] found id: ""
	I1206 10:39:12.672119  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.672126  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.672132  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.672191  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.699949  528268 cri.go:89] found id: ""
	I1206 10:39:12.699964  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.699971  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.699976  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.700038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.730867  528268 cri.go:89] found id: ""
	I1206 10:39:12.730881  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.730888  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.730896  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.730907  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.760666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.760682  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.827918  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.827939  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:12.845229  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:12.845250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:12.913571  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:12.913582  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:12.913606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.486285  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.496339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.496397  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.522751  528268 cri.go:89] found id: ""
	I1206 10:39:15.522765  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.522773  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.522782  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.522842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.548733  528268 cri.go:89] found id: ""
	I1206 10:39:15.548747  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.548760  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.548765  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.548823  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.574392  528268 cri.go:89] found id: ""
	I1206 10:39:15.574406  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.574413  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.574418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.574475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.600281  528268 cri.go:89] found id: ""
	I1206 10:39:15.600297  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.600311  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.600316  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.600376  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.626469  528268 cri.go:89] found id: ""
	I1206 10:39:15.626482  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.626490  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.626496  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.626561  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.652394  528268 cri.go:89] found id: ""
	I1206 10:39:15.652407  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.652414  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.652420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.652477  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.679527  528268 cri.go:89] found id: ""
	I1206 10:39:15.679540  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.679553  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.679561  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:15.679571  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.764342  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:15.764363  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:15.798376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.798394  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.868665  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.868685  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.883983  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.883999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.952342  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
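
Every cycle in this stretch ends the same way: the node-local kubeconfig points kubectl at localhost:8441, nothing is listening there, and "describe nodes" exits 1 with connection refused. The reachability half of that failure can be reproduced with a plain TCP dial; a stdlib sketch, with the address taken from the errors above:

    // probe.go: minimal reachability check against the apiserver port the
    // kubeconfig refers to; an error here matches the
    // "dial tcp [::1]:8441: connect: connection refused" lines in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8441 is accepting connections")
    }

Run it inside the node (minikube ssh), since 8441 is the node-local port the kubeconfig names.
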
	I1206 10:39:18.453493  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.463876  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.463935  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.490209  528268 cri.go:89] found id: ""
	I1206 10:39:18.490224  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.490231  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.490236  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.490294  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.516967  528268 cri.go:89] found id: ""
	I1206 10:39:18.516981  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.516988  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.516993  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.517054  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.546169  528268 cri.go:89] found id: ""
	I1206 10:39:18.546182  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.546189  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.546194  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.546253  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.571307  528268 cri.go:89] found id: ""
	I1206 10:39:18.571320  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.571327  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.571333  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.571391  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.596842  528268 cri.go:89] found id: ""
	I1206 10:39:18.596856  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.596863  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.596868  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.596924  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.622545  528268 cri.go:89] found id: ""
	I1206 10:39:18.622559  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.622566  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.622571  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.622628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.647866  528268 cri.go:89] found id: ""
	I1206 10:39:18.647879  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.647886  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.647894  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.647904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:18.722841  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:18.722867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:18.738489  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.738506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.804503  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.804514  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:18.804527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:18.873502  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.873520  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
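
The "container status" gatherer is a shell fallback chain: which crictl || echo crictl resolves the binary path and degrades to the bare name when the lookup fails, and the trailing || sudo docker ps -a falls back to the Docker CLI if the crictl invocation errors out entirely. A rough Go rendering of the same degrade-gracefully idea (a sketch, not the ssh_runner code):

    // fallback.go: try crictl first, fall back to docker, mirroring the
    // shell chain quoted in the log line above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl missing or failing: same role as "|| sudo docker ps -a".
    		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    		if err != nil {
    			fmt.Println("neither crictl nor docker answered:", err)
    			return
    		}
    	}
    	fmt.Print(string(out))
    }
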
	I1206 10:39:21.404064  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.414555  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.414615  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.439357  528268 cri.go:89] found id: ""
	I1206 10:39:21.439371  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.439378  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.439384  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.439444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.464257  528268 cri.go:89] found id: ""
	I1206 10:39:21.464270  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.464278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.464283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.464342  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.489051  528268 cri.go:89] found id: ""
	I1206 10:39:21.489065  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.489072  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.489077  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.489133  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.514898  528268 cri.go:89] found id: ""
	I1206 10:39:21.514912  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.514919  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.514930  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.514988  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.540268  528268 cri.go:89] found id: ""
	I1206 10:39:21.540283  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.540290  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.540296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.540361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.564943  528268 cri.go:89] found id: ""
	I1206 10:39:21.564957  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.564965  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.564970  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.565031  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.590819  528268 cri.go:89] found id: ""
	I1206 10:39:21.590833  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.590840  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.590848  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.590858  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.656247  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:21.656258  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:21.656268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:21.726649  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.726669  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.757883  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.757900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.827592  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.827612  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
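
One detail worth noticing: the gather order shuffles between cycles (CRI-O first at 10:39:15, kubelet first at 10:39:18, describe nodes first at 10:39:21). That pattern is consistent with the log sources being kept in a Go map, because Go deliberately randomizes map iteration order; treat this as an inference about logs.go, not a quotation from it. A self-contained demonstration:

    // maporder.go: Go randomizes map iteration, which would explain the
    // shifting "Gathering logs for ..." order across the cycles above.
    // The source names mirror the log; the map layout is an assumption.
    package main

    import "fmt"

    func main() {
    	sources := map[string]string{
    		"kubelet":          "journalctl -u kubelet -n 400",
    		"dmesg":            "dmesg --level warn,err,crit,alert,emerg",
    		"describe nodes":   "kubectl describe nodes",
    		"CRI-O":            "journalctl -u crio -n 400",
    		"container status": "crictl ps -a",
    	}
    	for run := 0; run < 3; run++ {
    		var order []string
    		for name := range sources {
    			order = append(order, name)
    		}
    		fmt.Println(order) // typically differs from one pass to the next
    	}
    }
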
	I1206 10:39:24.344952  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.355567  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.355629  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.381792  528268 cri.go:89] found id: ""
	I1206 10:39:24.381806  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.381814  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.381819  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.381880  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.406752  528268 cri.go:89] found id: ""
	I1206 10:39:24.406766  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.406773  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.406779  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.406837  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.435444  528268 cri.go:89] found id: ""
	I1206 10:39:24.435458  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.435466  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.435471  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.435537  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.460261  528268 cri.go:89] found id: ""
	I1206 10:39:24.460275  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.460282  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.460287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.460344  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.485676  528268 cri.go:89] found id: ""
	I1206 10:39:24.485689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.485697  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.485702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.485758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.515674  528268 cri.go:89] found id: ""
	I1206 10:39:24.515689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.515696  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.515702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.515759  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.540533  528268 cri.go:89] found id: ""
	I1206 10:39:24.540547  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.540555  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.540563  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.540573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.607514  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.607536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:24.622495  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.622512  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.688734  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.679787   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.680616   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.681733   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.682450   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.684164   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:24.679787   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.680616   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.681733   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.682450   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.684164   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.688745  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:24.688755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:24.767851  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.767871  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
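
The pgrep probes land on a steady three-second cadence (10:39:15, :18, :21, :24, ...), which points to a fixed-interval wait loop: check for a kube-apiserver process, dump diagnostics when it is absent, sleep, repeat until a deadline. A stdlib sketch of a loop with that shape; the interval is read off the timestamps, and the deadline is an assumption for illustration only:

    // poll.go: fixed-interval wait loop of the shape suggested by the
    // 3-second spacing of the pgrep probes in this log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // assumed timeout, for illustration
    	for time.Now().Before(deadline) {
    		// Same process check the log issues via ssh_runner.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
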
	I1206 10:39:27.298384  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.308520  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.308577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.337406  528268 cri.go:89] found id: ""
	I1206 10:39:27.337421  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.337429  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.337434  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.337492  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.363616  528268 cri.go:89] found id: ""
	I1206 10:39:27.363630  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.363637  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.363643  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.363700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.387807  528268 cri.go:89] found id: ""
	I1206 10:39:27.387821  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.387828  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.387833  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.387892  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.417047  528268 cri.go:89] found id: ""
	I1206 10:39:27.417061  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.417068  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.417076  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.417135  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.443034  528268 cri.go:89] found id: ""
	I1206 10:39:27.443047  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.443055  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.443060  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.443156  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.469276  528268 cri.go:89] found id: ""
	I1206 10:39:27.469289  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.469297  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.469302  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.469361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.494605  528268 cri.go:89] found id: ""
	I1206 10:39:27.494619  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.494626  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.494634  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.494681  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.522899  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.522916  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.593447  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.593467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.608920  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.608937  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.673774  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:27.673784  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:27.673795  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
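
Note that the describe nodes step never touches the host's kubectl: it runs the version-matched binary minikube keeps inside the node, /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl, against the node-local kubeconfig. Reproducing that exact invocation from Go looks like this (both paths quoted from the log; a sketch meant to be run inside the node):

    // kubectl.go: invoke the version-pinned kubectl exactly as the
    // "describe nodes" gatherer above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("describe nodes failed:", err) // exits 1 while nothing listens on 8441
    	}
    }
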
	I1206 10:39:30.246836  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.257118  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.257181  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.285905  528268 cri.go:89] found id: ""
	I1206 10:39:30.285918  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.285926  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.285931  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.285991  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.312233  528268 cri.go:89] found id: ""
	I1206 10:39:30.312247  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.312254  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.312259  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.312320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.342032  528268 cri.go:89] found id: ""
	I1206 10:39:30.342047  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.342061  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.342066  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.342127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.371021  528268 cri.go:89] found id: ""
	I1206 10:39:30.371051  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.371059  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.371064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.371145  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.397540  528268 cri.go:89] found id: ""
	I1206 10:39:30.397554  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.397561  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.397566  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.397625  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.424004  528268 cri.go:89] found id: ""
	I1206 10:39:30.424018  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.424026  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.424033  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.424090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.450313  528268 cri.go:89] found id: ""
	I1206 10:39:30.450327  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.450335  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.450342  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.450352  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:30.516474  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.516493  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.532143  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.532160  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.595585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:30.595595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:30.595606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:30.664167  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.664186  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
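
The five memcache.go:265 lines per attempt all carry a single PID (16074 in one attempt, 16164 in the next, and so on), so each kubectl invocation is one fresh process retrying API-group discovery a few times before printing the final "connection to the server ... was refused" line. The request that keeps failing is the discovery client's server-group list; a hedged client-go sketch of that call (requires the k8s.io/client-go module; the kubeconfig path is quoted from the log):

    // discovery.go: the API-group discovery request that kubectl's cached
    // discovery layer (memcache) wraps, and that fails repeatedly above.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		fmt.Println("build discovery client:", err)
    		return
    	}
    	groups, err := dc.ServerGroups()
    	if err != nil {
    		fmt.Println("API group list failed:", err) // "connection refused" while the apiserver is down
    		return
    	}
    	fmt.Println("found", len(groups.Groups), "API groups")
    }
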
	I1206 10:39:33.200924  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.211672  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.211735  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.237137  528268 cri.go:89] found id: ""
	I1206 10:39:33.237151  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.237159  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.237165  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.237265  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.263318  528268 cri.go:89] found id: ""
	I1206 10:39:33.263332  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.263339  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.263345  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.263403  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.292810  528268 cri.go:89] found id: ""
	I1206 10:39:33.292824  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.292832  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.292837  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.292902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.322280  528268 cri.go:89] found id: ""
	I1206 10:39:33.322294  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.322302  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.322307  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.322371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.347371  528268 cri.go:89] found id: ""
	I1206 10:39:33.347384  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.347391  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.347397  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.347454  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.373452  528268 cri.go:89] found id: ""
	I1206 10:39:33.373465  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.373473  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.373478  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.373536  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.398875  528268 cri.go:89] found id: ""
	I1206 10:39:33.398895  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.398902  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.398910  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.398921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.465783  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.465803  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.480960  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.480977  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.548139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:33.548148  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:33.548158  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:33.617390  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.617412  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
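
The dmesg gatherer deliberately narrows the kernel log: per util-linux, -P disables the pager, -H selects human-readable output, -L=never turns color off, --level keeps only warning severity and worse, and tail trims to the last 400 lines. Wrapped for standalone reuse, with the command string quoted from the log (a sketch):

    // dmesg.go: the kernel-log gather used above, runnable on its own.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("/bin/bash", "-c",
    		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).CombinedOutput()
    	if err != nil {
    		fmt.Println("dmesg failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
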
	I1206 10:39:36.152703  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.162988  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.163052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.188586  528268 cri.go:89] found id: ""
	I1206 10:39:36.188599  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.188607  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.188611  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.188670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.213361  528268 cri.go:89] found id: ""
	I1206 10:39:36.213374  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.213383  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.213388  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.213445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.239271  528268 cri.go:89] found id: ""
	I1206 10:39:36.239285  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.239292  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.239297  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.239357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.265679  528268 cri.go:89] found id: ""
	I1206 10:39:36.265695  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.265702  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.265707  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.265766  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.295654  528268 cri.go:89] found id: ""
	I1206 10:39:36.295668  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.295675  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.295681  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.295739  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.323853  528268 cri.go:89] found id: ""
	I1206 10:39:36.323874  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.323881  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.323887  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.323950  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.348624  528268 cri.go:89] found id: ""
	I1206 10:39:36.348639  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.348646  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.348654  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.348665  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.363245  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.363261  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.427550  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:36.427562  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:36.427573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:36.495925  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495943  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.524935  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.524952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.092735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.102812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.102870  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.129292  528268 cri.go:89] found id: ""
	I1206 10:39:39.129306  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.129313  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.129318  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.129374  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.158470  528268 cri.go:89] found id: ""
	I1206 10:39:39.158484  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.158491  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.158496  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.158555  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.184281  528268 cri.go:89] found id: ""
	I1206 10:39:39.184295  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.184303  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.184308  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.184371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.213800  528268 cri.go:89] found id: ""
	I1206 10:39:39.213813  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.213820  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.213825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.213879  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.239313  528268 cri.go:89] found id: ""
	I1206 10:39:39.239327  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.239334  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.239339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.239399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.266416  528268 cri.go:89] found id: ""
	I1206 10:39:39.266429  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.266436  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.266442  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.266497  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.291512  528268 cri.go:89] found id: ""
	I1206 10:39:39.291526  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.291533  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.291541  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.291552  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.357396  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.357414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.372532  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.372549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.435924  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.435935  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:39.435946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:39.504162  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.504182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
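
The block above is one iteration of minikube's control-plane wait loop: it polls for a kube-apiserver process, lists control-plane containers through crictl, and, when nothing is found, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying (the timestamps show a roughly three-second cadence). A minimal sketch of the same probe run by hand inside the node, using only commands visible in the log; the loop itself is an illustration, not minikube's code:

    # Is an apiserver process alive? (minikube's first check)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found"
    # Then each component is checked for CRI containers in any state,
    # mirroring the seven names the log cycles through:
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -n "$ids" ] && echo "$c: $ids" || echo "$c: no containers"
    done
    # The log sources minikube collects when the probe finds nothing:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

Every probe in this section comes back empty because, as the rest of the log shows, the kubelet never becomes healthy and so no static pods are ever created.
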
	I1206 10:39:42.034738  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.045722  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.045786  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.075972  528268 cri.go:89] found id: ""
	I1206 10:39:42.075988  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.075998  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.076004  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.076071  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.111989  528268 cri.go:89] found id: ""
	I1206 10:39:42.112018  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.112042  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.112048  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.112124  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.147538  528268 cri.go:89] found id: ""
	I1206 10:39:42.147562  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.147571  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.147577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.147654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.177982  528268 cri.go:89] found id: ""
	I1206 10:39:42.177999  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.178009  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.178016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.178090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.209844  528268 cri.go:89] found id: ""
	I1206 10:39:42.209860  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.209868  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.209874  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.209966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.266057  528268 cri.go:89] found id: ""
	I1206 10:39:42.266071  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.266079  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.266085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.266153  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.298140  528268 cri.go:89] found id: ""
	I1206 10:39:42.298154  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.298162  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.298184  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.298197  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.330034  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.330051  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.396938  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.396958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.412056  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.412077  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.481304  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:42.481314  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:42.481326  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.054765  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.080943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.081023  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.141872  528268 cri.go:89] found id: ""
	I1206 10:39:45.141889  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.141898  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.141904  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.141970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.187818  528268 cri.go:89] found id: ""
	I1206 10:39:45.187838  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.187846  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.187854  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.187928  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.231785  528268 cri.go:89] found id: ""
	I1206 10:39:45.231815  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.231846  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.231853  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.232001  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.271976  528268 cri.go:89] found id: ""
	I1206 10:39:45.272000  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.272007  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.272020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.272144  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.309755  528268 cri.go:89] found id: ""
	I1206 10:39:45.309770  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.309778  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.309784  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.309859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.337077  528268 cri.go:89] found id: ""
	I1206 10:39:45.337091  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.337098  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.337104  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.337161  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.363255  528268 cri.go:89] found id: ""
	I1206 10:39:45.363269  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.363277  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.363285  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.363295  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.430326  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.430345  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.445222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.445239  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.514305  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:45.514315  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:45.514351  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.586673  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.586702  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.117880  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.128191  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.128261  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.153898  528268 cri.go:89] found id: ""
	I1206 10:39:48.153912  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.153919  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.153924  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.153986  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.179947  528268 cri.go:89] found id: ""
	I1206 10:39:48.179960  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.179968  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.179973  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.180032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.206970  528268 cri.go:89] found id: ""
	I1206 10:39:48.206984  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.206992  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.206997  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.207056  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.232490  528268 cri.go:89] found id: ""
	I1206 10:39:48.232504  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.232511  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.232516  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.232574  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.261888  528268 cri.go:89] found id: ""
	I1206 10:39:48.261902  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.261909  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.261915  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.261970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.287239  528268 cri.go:89] found id: ""
	I1206 10:39:48.287259  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.287266  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.287271  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.287327  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.312701  528268 cri.go:89] found id: ""
	I1206 10:39:48.312716  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.312723  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.312730  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.312741  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.379854  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.379873  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:48.395027  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.395043  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.467966  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:48.467977  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:48.467999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:48.537326  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.537347  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:51.077353  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.088357  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.088422  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.113964  528268 cri.go:89] found id: ""
	I1206 10:39:51.113978  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.113986  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.113991  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.114048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.141966  528268 cri.go:89] found id: ""
	I1206 10:39:51.141981  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.141989  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.141994  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.142065  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.170585  528268 cri.go:89] found id: ""
	I1206 10:39:51.170599  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.170607  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.170612  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.170670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.196958  528268 cri.go:89] found id: ""
	I1206 10:39:51.196972  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.196980  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.196985  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.197045  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.222240  528268 cri.go:89] found id: ""
	I1206 10:39:51.222255  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.222262  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.222267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.222328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.248023  528268 cri.go:89] found id: ""
	I1206 10:39:51.248038  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.248045  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.248051  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.248110  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.276094  528268 cri.go:89] found id: ""
	I1206 10:39:51.276108  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.276115  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.276122  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.276132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.342420  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.342443  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.357018  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.357034  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.423986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.423996  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:51.424007  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:51.493620  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.493640  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:54.023829  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:54.034889  528268 kubeadm.go:602] duration metric: took 4m2.326619845s to restartPrimaryControlPlane
	W1206 10:39:54.034955  528268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:39:54.035078  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:39:54.453084  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:39:54.466906  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:39:54.474624  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:39:54.474678  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:39:54.482552  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:39:54.482562  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:39:54.482612  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:39:54.490238  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:39:54.490301  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:39:54.497760  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:39:54.505776  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:39:54.505840  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:39:54.513397  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.521456  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:39:54.521517  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.529274  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:39:54.537105  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:39:54.537161  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
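
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it still references the expected control-plane endpoint. Here every grep exits with status 2 because kubeadm reset already deleted the files, so the rm calls are no-ops. A hedged one-loop equivalent of what the log does file by file (the ENDPOINT value is copied from the log):

    ENDPOINT="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      # grep fails both for a missing file and for a stale endpoint;
      # either way the config is removed before kubeadm init rewrites it.
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
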
	I1206 10:39:54.544719  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:39:54.584997  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:39:54.585045  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:39:54.652750  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:39:54.652815  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:39:54.652850  528268 kubeadm.go:319] OS: Linux
	I1206 10:39:54.652893  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:39:54.652940  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:39:54.652986  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:39:54.653033  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:39:54.653079  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:39:54.653126  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:39:54.653171  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:39:54.653217  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:39:54.653262  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:39:54.728791  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:39:54.728901  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:39:54.729018  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:39:54.737647  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:39:54.741159  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:39:54.741265  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:39:54.741337  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:39:54.741433  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:39:54.741505  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:39:54.741585  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:39:54.741651  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:39:54.741743  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:39:54.741813  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:39:54.741895  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:39:54.741991  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:39:54.742045  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:39:54.742113  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:39:55.375743  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:39:55.444664  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:39:55.561708  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:39:55.802678  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:39:55.992428  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:39:55.993134  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:39:55.995941  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:39:55.999335  528268 out.go:252]   - Booting up control plane ...
	I1206 10:39:55.999434  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:39:55.999507  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:39:55.999569  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:39:56.016567  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:39:56.016688  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:39:56.025029  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:39:56.025345  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:39:56.025411  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:39:56.167783  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:39:56.167896  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:43:56.165890  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000163749s
	I1206 10:43:56.165916  528268 kubeadm.go:319] 
	I1206 10:43:56.165973  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:43:56.166007  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:43:56.166124  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:43:56.166130  528268 kubeadm.go:319] 
	I1206 10:43:56.166237  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:43:56.166298  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:43:56.166345  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:43:56.166349  528268 kubeadm.go:319] 
	I1206 10:43:56.171451  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:43:56.171899  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:43:56.172014  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:43:56.172288  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:43:56.172293  528268 kubeadm.go:319] 
	I1206 10:43:56.172374  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:43:56.172501  528268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000163749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
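
Both init attempts die at the same point: kubeadm waits up to 4m0s for the kubelet's local health endpoint and never gets an answer, so the control-plane static pods are never started. The diagnostics are already named in the output above; collected into one place, run on the node:

    # Is the kubelet unit running at all, and why did it stop?
    systemctl status kubelet
    journalctl -xeu kubelet          # the two commands kubeadm recommends
    # The exact health probe kubeadm polls for up to 4m0s:
    curl -sSL http://127.0.0.1:10248/healthz

The repeated SystemVerification warning is the most concrete lead the log offers: this node runs cgroups v1, and the warning text itself states that to enable cgroup v1 support for kubelet v1.35 or newer, the kubelet configuration option 'FailCgroupV1' must be set to 'false'.
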
	
	I1206 10:43:56.172597  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:43:56.619462  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:43:56.633229  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:43:56.633287  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:43:56.641609  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:43:56.641619  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:43:56.641669  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:43:56.649494  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:43:56.649548  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:43:56.657009  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:43:56.665153  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:43:56.665204  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:43:56.672965  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.681003  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:43:56.681063  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.688721  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:43:56.696901  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:43:56.696963  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:43:56.704620  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:43:56.745749  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:43:56.745826  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:43:56.814552  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:43:56.814625  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:43:56.814668  528268 kubeadm.go:319] OS: Linux
	I1206 10:43:56.814710  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:43:56.814764  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:43:56.814817  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:43:56.814861  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:43:56.814913  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:43:56.814977  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:43:56.815030  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:43:56.815078  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:43:56.815150  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:43:56.882919  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:43:56.883028  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:43:56.883177  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:43:56.891776  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:43:56.897133  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:43:56.897243  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:43:56.897331  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:43:56.897418  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:43:56.897483  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:43:56.897556  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:43:56.897613  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:43:56.897679  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:43:56.897743  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:43:56.897822  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:43:56.897898  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:43:56.897938  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:43:56.897997  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:43:57.103756  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:43:57.598666  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:43:58.161834  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:43:58.402161  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:43:58.630471  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:43:58.631113  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:43:58.634023  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:43:58.637198  528268 out.go:252]   - Booting up control plane ...
	I1206 10:43:58.637294  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:43:58.637640  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:43:58.639086  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:43:58.654264  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:43:58.654366  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:43:58.662722  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:43:58.663439  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:43:58.663774  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:43:58.799365  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:43:58.799473  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:47:58.799403  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000249913s
	I1206 10:47:58.799433  528268 kubeadm.go:319] 
	I1206 10:47:58.799491  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:47:58.799521  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:47:58.799619  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:47:58.799623  528268 kubeadm.go:319] 
	I1206 10:47:58.799720  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:47:58.799749  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:47:58.799777  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:47:58.799780  528268 kubeadm.go:319] 
	I1206 10:47:58.803822  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:47:58.804249  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:47:58.804357  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:47:58.804590  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:47:58.804595  528268 kubeadm.go:319] 
	I1206 10:47:58.804663  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:47:58.804715  528268 kubeadm.go:403] duration metric: took 12m7.139257328s to StartCluster
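
The 12m7s StartCluster total is consistent with the attempts recorded above: about 4m2s in restartPrimaryControlPlane (the duration metric logged at 10:39:54), then two kubeadm init runs that each consumed the full 4m0s kubelet-health window (10:39:56 to 10:43:56, and 10:43:58 to 10:47:58), with the kubeadm resets and certificate reuse accounting for the remaining few seconds. What follows is a further diagnostic pass over the same, still empty, container lists.
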
	I1206 10:47:58.804746  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:47:58.804808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:47:58.833842  528268 cri.go:89] found id: ""
	I1206 10:47:58.833855  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.833863  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:47:58.833869  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:47:58.833925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:47:58.859642  528268 cri.go:89] found id: ""
	I1206 10:47:58.859656  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.859663  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:47:58.859668  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:47:58.859731  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:47:58.888835  528268 cri.go:89] found id: ""
	I1206 10:47:58.888850  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.888857  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:47:58.888863  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:47:58.888920  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:47:58.913692  528268 cri.go:89] found id: ""
	I1206 10:47:58.913706  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.913713  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:47:58.913718  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:47:58.913775  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:47:58.941639  528268 cri.go:89] found id: ""
	I1206 10:47:58.941653  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.941660  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:47:58.941671  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:47:58.941728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:47:58.968219  528268 cri.go:89] found id: ""
	I1206 10:47:58.968240  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.968249  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:47:58.968254  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:47:58.968312  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:47:58.993376  528268 cri.go:89] found id: ""
	I1206 10:47:58.993390  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.993397  528268 logs.go:284] No container was found matching "kindnet"
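
All seven `crictl ps` probes above come back empty because no control-plane container was ever created. The sweep the log collector performs is equivalent to the following loop, which is handy for checking a node by hand:

```sh
# Same per-component container sweep as the cri.go lines above; an empty
# result for a component means no container (running or exited) exists.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet; do
  echo "== $name =="
  sudo crictl ps -a --quiet --name="$name"
done
```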
	I1206 10:47:58.993405  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:47:58.993415  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:47:59.059491  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:47:59.059510  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:47:59.075692  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:47:59.075708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:47:59.140902  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... the same five "connection refused" memcache errors and refusal message as shown above ...]
	
	** /stderr **
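
`describe nodes` fails for the same underlying reason: with no kubelet there is no kube-apiserver static pod, so nothing listens on port 8441, the apiserver port this profile uses. A quick hand check (sketch; the port is taken from the errors above):

```sh
# Confirm nothing is bound to the apiserver port before blaming kubectl.
sudo ss -ltnp | grep -w 8441 || echo "no listener on 8441"
curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"
```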
	I1206 10:47:59.140911  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:47:59.140922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:47:59.218521  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:47:59.218539  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:47:59.255468  528268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:47:59.255514  528268 out.go:285] * 
	W1206 10:47:59.255766  528268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the "Error starting cluster" output above ...]
	W1206 10:47:59.255841  528268 out.go:285] * 
	W1206 10:47:59.258456  528268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:47:59.265427  528268 out.go:203] 
	W1206 10:47:59.268413  528268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the "Error starting cluster" output above ...]
	W1206 10:47:59.268473  528268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:47:59.268491  528268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:47:59.271584  528268 out.go:203] 
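
Minikube's suggestion above can be applied as a plain retry with the kubelet cgroup-driver override (flag quoted verbatim from the log; whether it helps depends on the cgroup v1 validation discussed earlier):

```sh
# Retry the start with the suggested kubelet cgroup-driver override.
minikube start -p functional-123579 --extra-config=kubelet.cgroup-driver=systemd
```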
	
	
	==> CRI-O <==
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510413066Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510587528Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510631539Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510692789Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542613043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.54278168Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542832714Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.568965528Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569093041Z" level=info msg="Image localhost/kicbase/echo-server:functional-123579 not found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569130307Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-123579 found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.415971983Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416234295Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416285124Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416360913Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443629234Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443787499Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443828999Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.48107794Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=b88f3676-3120-4861-8534-602a63bfd49e name=/runtime.v1.ImageService/ImageStatus
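
The `Resolving "kicbase/echo-server" using unqualified-search registries` lines show CRI-O expanding the short image name through its registries configuration before concluding the image is absent under every prefix (docker.io, localhost). Inspecting the search list is a one-liner; the path is the one CRI-O names in the log, and a typical entry is a single TOML line such as `unqualified-search-registries = ["docker.io"]`:

```sh
# Show the short-name search list CRI-O consulted above.
sudo cat /etc/containers/registries.conf.d/crio.conf
```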
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:50:03.675440   23329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:03.677115   23329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:03.677959   23329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:03.679688   23329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:50:03.680001   23329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:03 up  3:32,  0 user,  load average: 0.34, 0.31, 0.47
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:50:01 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:02 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2292.
	Dec 06 10:50:02 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:02 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:02 functional-123579 kubelet[23225]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:02 functional-123579 kubelet[23225]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:02 functional-123579 kubelet[23225]: E1206 10:50:02.243544   23225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:02 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:02 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:02 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2293.
	Dec 06 10:50:02 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:02 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:02 functional-123579 kubelet[23246]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:02 functional-123579 kubelet[23246]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:02 functional-123579 kubelet[23246]: E1206 10:50:02.997269   23246 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:03 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:03 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:50:03 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2294.
	Dec 06 10:50:03 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:03 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:50:03 functional-123579 kubelet[23333]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:03 functional-123579 kubelet[23333]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:50:03 functional-123579 kubelet[23333]: E1206 10:50:03.738902   23333 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:50:03 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:50:03 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
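
By this point systemd has respawned kubelet well over two thousand times against the same validation error ("configured to not run on a host using cgroup v1"). Whether the host really is on cgroup v1 can be confirmed in one line:

```sh
# cgroup2fs means the unified cgroup v2 hierarchy; tmpfs means legacy cgroup v1.
stat -fc %T /sys/fs/cgroup/
```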
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (376.253449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
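
helpers_test drives `minikube status` with a Go template so it can branch on a single field; the same one-liner is useful interactively when diagnosing a profile in this state:

```sh
# Print just the apiserver state (e.g. Running / Stopped) for the profile.
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-123579
```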
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.60s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
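
The wait loop polls the API server for pods carrying the storage-provisioner label, and every poll below fails at the TCP layer because the apiserver never came up. The equivalent manual query (sketch; the selector is taken from the request URLs below):

```sh
# Same pod query the test helper issues, expressed as a kubectl command.
kubectl -n kube-system get pods -l integration-test=storage-provisioner
```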
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the pod-list warning above repeated 8 more times ...]
I1206 10:48:25.786943  488068 retry.go:31] will retry after 3.046145531s: Temporary Error: Get "http://10.104.109.146": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the pod-list warning above repeated 9 more times ...]
E1206 10:48:35.937148  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the pod-list warning above repeated 2 more times ...]
I1206 10:48:38.834400  488068 retry.go:31] will retry after 3.632517801s: Temporary Error: Get "http://10.104.109.146": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the pod-list warning above repeated 12 more times ...]
I1206 10:48:52.469080  488068 retry.go:31] will retry after 6.014861631s: Temporary Error: Get "http://10.104.109.146": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the pod-list warning above repeated 15 more times ...]
I1206 10:49:08.484974  488068 retry.go:31] will retry after 11.241887448s: Temporary Error: Get "http://10.104.109.146": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 21 times in a row; duplicates omitted]
I1206 10:49:29.727857  488068 retry.go:31] will retry after 22.171014638s: Temporary Error: Get "http://10.104.109.146": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 130 times in a row; duplicates omitted]
E1206 10:51:39.006714  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (326.529804ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
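The poll that timed out is an ordinary pod list by label against the kube-system namespace; assuming the functional-123579 kubeconfig context from this run, the same query can be reproduced by hand with:

	kubectl --context functional-123579 -n kube-system get pods -l integration-test=storage-provisioner

With the apiserver down, this fails with the same connection-refused error shown in the warnings above.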
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
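The Ports map above shows how each container port, including the apiserver's 8441/tcp, is published on a loopback host port. One way to pull a single mapping out of this JSON is docker's built-in Go-template formatter (container name taken from the output above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-123579
	# or, equivalently:
	docker port functional-123579 8441/tcp

Either resolves to the 33186 host port recorded in the inspect output.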
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (325.114221ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
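The two status probes differ only in the Go template passed to --format: {{.Host}} reports the state of the node container (Running) while {{.APIServer}} reports the control plane (Stopped), which is why the harness treats exit status 2 as possibly benign. A combined form of the same check (a sketch; field names per minikube's status template) would be:

	out/minikube-linux-arm64 status -p functional-123579 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'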
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-123579 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh            │ functional-123579 ssh -- ls -la /mount-9p                                                                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh            │ functional-123579 ssh sudo umount -f /mount-9p                                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount          │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount1 --alsologtostderr -v=1          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount          │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount2 --alsologtostderr -v=1          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ mount          │ -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount3 --alsologtostderr -v=1          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh            │ functional-123579 ssh findmnt -T /mount1                                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ ssh            │ functional-123579 ssh findmnt -T /mount1                                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh            │ functional-123579 ssh findmnt -T /mount2                                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh            │ functional-123579 ssh findmnt -T /mount3                                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ mount          │ -p functional-123579 --kill=true                                                                                                              │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ start          │ -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ start          │ -p functional-123579 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ start          │ -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-123579 --alsologtostderr -v=1                                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ update-context │ functional-123579 update-context --alsologtostderr -v=2                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ update-context │ functional-123579 update-context --alsologtostderr -v=2                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ update-context │ functional-123579 update-context --alsologtostderr -v=2                                                                                       │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ image          │ functional-123579 image ls --format short --alsologtostderr                                                                                   │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ image          │ functional-123579 image ls --format yaml --alsologtostderr                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ ssh            │ functional-123579 ssh pgrep buildkitd                                                                                                         │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │                     │
	│ image          │ functional-123579 image build -t localhost/my-image:functional-123579 testdata/build --alsologtostderr                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ image          │ functional-123579 image ls                                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ image          │ functional-123579 image ls --format json --alsologtostderr                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	│ image          │ functional-123579 image ls --format table --alsologtostderr                                                                                   │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:50 UTC │ 06 Dec 25 10:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:50:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:50:16.228088  547112 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:50:16.228226  547112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:16.228235  547112 out.go:374] Setting ErrFile to fd 2...
	I1206 10:50:16.228242  547112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:16.228610  547112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:50:16.229021  547112 out.go:368] Setting JSON to false
	I1206 10:50:16.229905  547112 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12768,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:50:16.229980  547112 start.go:143] virtualization:  
	I1206 10:50:16.233324  547112 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:50:16.236330  547112 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:50:16.236404  547112 notify.go:221] Checking for updates...
	I1206 10:50:16.242247  547112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:50:16.245134  547112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:50:16.248006  547112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:50:16.250882  547112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:50:16.253750  547112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:50:16.256997  547112 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:50:16.257560  547112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:50:16.278739  547112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:50:16.278856  547112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:50:16.342153  547112 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:50:16.332904034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:50:16.342267  547112 docker.go:319] overlay module found
	I1206 10:50:16.345362  547112 out.go:179] * Using the docker driver based on the existing profile
	I1206 10:50:16.348240  547112 start.go:309] selected driver: docker
	I1206 10:50:16.348265  547112 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:50:16.348367  547112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:50:16.351933  547112 out.go:203] 
	W1206 10:50:16.354791  547112 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:50:16.357639  547112 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510413066Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510587528Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510631539Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510692789Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542613043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.54278168Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542832714Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.568965528Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569093041Z" level=info msg="Image localhost/kicbase/echo-server:functional-123579 not found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569130307Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-123579 found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.415971983Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416234295Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416285124Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.416360913Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=d02ceb5e-e1d4-444e-b5cf-afd7146cf8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443629234Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443787499Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.443828999Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=cdf51062-f60d-426d-8465-769b2314eeb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:12 functional-123579 crio[9949]: time="2025-12-06T10:48:12.48107794Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=b88f3676-3120-4861-8534-602a63bfd49e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:19.045455   25416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:19.046264   25416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:19.047847   25416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:19.048196   25416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:19.049663   25416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:52:19 up  3:34,  0 user,  load average: 0.11, 0.28, 0.44
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:52:16 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:52:17 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2472.
	Dec 06 10:52:17 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:52:17 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:52:17 functional-123579 kubelet[25289]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:52:17 functional-123579 kubelet[25289]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:52:17 functional-123579 kubelet[25289]: E1206 10:52:17.225927   25289 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:52:17 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:52:17 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:52:17 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2473.
	Dec 06 10:52:17 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:52:17 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:52:17 functional-123579 kubelet[25297]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:52:17 functional-123579 kubelet[25297]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:52:17 functional-123579 kubelet[25297]: E1206 10:52:17.983157   25297 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:52:17 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:52:17 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:52:18 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2474.
	Dec 06 10:52:18 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:52:18 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:52:18 functional-123579 kubelet[25338]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:52:18 functional-123579 kubelet[25338]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:52:18 functional-123579 kubelet[25338]: E1206 10:52:18.735383   25338 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:52:18 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:52:18 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
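The restart loop above (restart counter 2473-2474) explains the "Stopped" apiserver status reported just below: kubelet v1.35.0-beta.0 validates its configuration at startup and exits because the host is still on cgroup v1, so the static control-plane pods never come up. A quick, generic way to check which cgroup version a host is running (shown for reference; not part of the test harness):

	stat -fc %T /sys/fs/cgroup/    # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1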
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (341.173212ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-123579 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-123579 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (86.400306ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-123579 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
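All five label assertions above fail with the same underlying error: the apiserver on 192.168.49.2:8441 refuses connections, kubectl falls back to an empty List ({"items":[]}), and (index .items 0) then panics with a slice index out of range. A guarded variant of the same query avoids the template panic (a sketch only; it still prints nothing while the apiserver is down, since empty .items is falsy in Go templates):

	kubectl --context functional-123579 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'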
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-123579
helpers_test.go:243: (dbg) docker inspect functional-123579:

-- stdout --
	[
	    {
	        "Id": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	        "Created": "2025-12-06T10:21:05.490589445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:21:05.573219423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/hosts",
	        "LogPath": "/var/lib/docker/containers/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721/86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721-json.log",
	        "Name": "/functional-123579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-123579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-123579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e8d3865f80be3254b92ab2fcb7b17081d8d21b435ec3b2a5b1aabe98529721",
	                "LowerDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3e6bb48abda73afd2fdfa2ff2ca94aeb05fe3d4253ee29dbf14f4cc8fc9916f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-123579",
	                "Source": "/var/lib/docker/volumes/functional-123579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-123579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-123579",
	                "name.minikube.sigs.k8s.io": "functional-123579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10921d51d4ec866d78853297249318b04ef864639c8e07349985c5733ba03a26",
	            "SandboxKey": "/var/run/docker/netns/10921d51d4ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-123579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:5b:29:c4:a4:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa75a7cb7ddfb7086d66f629904d681a84e2c9da78725396c4dc859cfc5aa536",
	                    "EndpointID": "eff9632b5a6c335169f4a61b3c9f1727c30b30183ac61ac9730ddb7b0d19cf24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-123579",
	                        "86e8d3865f80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
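Per the NetworkSettings.Ports map in the inspect output above, the apiserver port 8441/tcp is published on 127.0.0.1:33186. The mapping can be read back directly with the same kind of format query minikube itself uses for the SSH port later in this log:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-123579
	# expected output, given the dump above: 33186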
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-123579 -n functional-123579: exit status 2 (398.799751ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 logs -n 25: (1.349329993s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ list                                                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl images                                                                                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ cache   │ functional-123579 cache reload                                                                                                                               │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ ssh     │ functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ kubectl │ functional-123579 kubectl -- --context functional-123579 get pods                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ start   │ -p functional-123579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │                     │
	│ cp      │ functional-123579 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ config  │ functional-123579 config unset cpus                                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ config  │ functional-123579 config get cpus                                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ config  │ functional-123579 config set cpus 2                                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ config  │ functional-123579 config unset cpus                                                                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh -n functional-123579 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ config  │ functional-123579 config get cpus                                                                                                                            │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ cp      │ functional-123579 cp functional-123579:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2311564542/001/cp-test.txt │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo systemctl is-active docker                                                                                                        │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-123579 ssh -n functional-123579 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh sudo systemctl is-active containerd                                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	│ cp      │ functional-123579 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ ssh     │ functional-123579 ssh -n functional-123579 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │ 06 Dec 25 10:48 UTC │
	│ image   │ functional-123579 image load --daemon kicbase/echo-server:functional-123579 --alsologtostderr                                                                │ functional-123579 │ jenkins │ v1.37.0 │ 06 Dec 25 10:48 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:35:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:35:46.955658  528268 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:46.955828  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.955833  528268 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:46.955837  528268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:46.956177  528268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:35:46.956655  528268 out.go:368] Setting JSON to false
	I1206 10:35:46.957664  528268 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11898,"bootTime":1765005449,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:35:46.957734  528268 start.go:143] virtualization:  
	I1206 10:35:46.961283  528268 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:35:46.964510  528268 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:35:46.964613  528268 notify.go:221] Checking for updates...
	I1206 10:35:46.968278  528268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:46.971356  528268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:35:46.974199  528268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:35:46.977104  528268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:35:46.980765  528268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:46.984213  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:46.984322  528268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:47.012645  528268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:35:47.012749  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.074577  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.064697556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.074671  528268 docker.go:319] overlay module found
	I1206 10:35:47.077640  528268 out.go:179] * Using the docker driver based on existing profile
	I1206 10:35:47.080521  528268 start.go:309] selected driver: docker
	I1206 10:35:47.080533  528268 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.080637  528268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:47.080758  528268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:35:47.138440  528268 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:35:47.128848609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:35:47.138821  528268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:35:47.138844  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:47.138899  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:47.138936  528268 start.go:353] cluster config:
	{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:47.144166  528268 out.go:179] * Starting "functional-123579" primary control-plane node in "functional-123579" cluster
	I1206 10:35:47.147068  528268 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:35:47.149949  528268 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:35:47.152780  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:47.152816  528268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:35:47.152824  528268 cache.go:65] Caching tarball of preloaded images
	I1206 10:35:47.152870  528268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:35:47.152921  528268 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:35:47.152931  528268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:35:47.153043  528268 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/config.json ...
	I1206 10:35:47.172511  528268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:35:47.172523  528268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:35:47.172545  528268 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:35:47.172580  528268 start.go:360] acquireMachinesLock for functional-123579: {Name:mk35a9adf20f50a3c49b774a4ee092917f16cc66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:35:47.172652  528268 start.go:364] duration metric: took 54.497µs to acquireMachinesLock for "functional-123579"
	I1206 10:35:47.172672  528268 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:35:47.172676  528268 fix.go:54] fixHost starting: 
	I1206 10:35:47.172937  528268 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
	I1206 10:35:47.189604  528268 fix.go:112] recreateIfNeeded on functional-123579: state=Running err=<nil>
	W1206 10:35:47.189624  528268 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:35:47.192615  528268 out.go:252] * Updating the running docker "functional-123579" container ...
	I1206 10:35:47.192637  528268 machine.go:94] provisionDockerMachine start ...
	I1206 10:35:47.192731  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.209670  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.209990  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.209996  528268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:35:47.362840  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.362854  528268 ubuntu.go:182] provisioning hostname "functional-123579"
	I1206 10:35:47.362918  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.381544  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.381860  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.381868  528268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-123579 && echo "functional-123579" | sudo tee /etc/hostname
	I1206 10:35:47.544930  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-123579
	
	I1206 10:35:47.545031  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.563487  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:47.563810  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:47.563823  528268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-123579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-123579/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-123579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:35:47.717170  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:35:47.717187  528268 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 10:35:47.717204  528268 ubuntu.go:190] setting up certificates
	I1206 10:35:47.717211  528268 provision.go:84] configureAuth start
	I1206 10:35:47.717282  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:47.741856  528268 provision.go:143] copyHostCerts
	I1206 10:35:47.741924  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 10:35:47.741936  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 10:35:47.742009  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 10:35:47.742105  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 10:35:47.742109  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 10:35:47.742132  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 10:35:47.742180  528268 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 10:35:47.742184  528268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 10:35:47.742206  528268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 10:35:47.742252  528268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.functional-123579 san=[127.0.0.1 192.168.49.2 functional-123579 localhost minikube]
	I1206 10:35:47.924439  528268 provision.go:177] copyRemoteCerts
	I1206 10:35:47.924500  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:35:47.924538  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:47.942367  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.047397  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:35:48.065928  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:35:48.085149  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:35:48.103937  528268 provision.go:87] duration metric: took 386.701009ms to configureAuth
	I1206 10:35:48.103956  528268 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:35:48.104161  528268 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:48.104265  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.122386  528268 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:48.122699  528268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1206 10:35:48.122711  528268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:35:48.484149  528268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:35:48.484161  528268 machine.go:97] duration metric: took 1.291517603s to provisionDockerMachine
	I1206 10:35:48.484171  528268 start.go:293] postStartSetup for "functional-123579" (driver="docker")
	I1206 10:35:48.484183  528268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:35:48.484243  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:35:48.484311  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.507680  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.615171  528268 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:35:48.618416  528268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:35:48.618434  528268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:35:48.618444  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 10:35:48.618496  528268 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 10:35:48.618569  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 10:35:48.618650  528268 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts -> hosts in /etc/test/nested/copy/488068
	I1206 10:35:48.618693  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488068
	I1206 10:35:48.626464  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:48.643882  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts --> /etc/test/nested/copy/488068/hosts (40 bytes)
	I1206 10:35:48.662582  528268 start.go:296] duration metric: took 178.395271ms for postStartSetup
	I1206 10:35:48.662675  528268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:35:48.662713  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.680751  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.784322  528268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:35:48.789238  528268 fix.go:56] duration metric: took 1.616554387s for fixHost
	I1206 10:35:48.789253  528268 start.go:83] releasing machines lock for "functional-123579", held for 1.616594099s
	I1206 10:35:48.789324  528268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-123579
	I1206 10:35:48.807477  528268 ssh_runner.go:195] Run: cat /version.json
	I1206 10:35:48.807520  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.807562  528268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:35:48.807618  528268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
	I1206 10:35:48.828942  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:48.845083  528268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
	I1206 10:35:49.020126  528268 ssh_runner.go:195] Run: systemctl --version
	I1206 10:35:49.026608  528268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:35:49.065500  528268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:35:49.069961  528268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:35:49.070024  528268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:35:49.077978  528268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:35:49.077992  528268 start.go:496] detecting cgroup driver to use...
	I1206 10:35:49.078033  528268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:35:49.078078  528268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:35:49.093402  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:35:49.106707  528268 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:35:49.106771  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:35:49.122603  528268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:35:49.135424  528268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:35:49.251969  528268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:35:49.384025  528268 docker.go:234] disabling docker service ...
	I1206 10:35:49.384082  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:35:49.398904  528268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:35:49.412283  528268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:35:49.535452  528268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:35:49.651851  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:35:49.665735  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:35:49.680503  528268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:35:49.680561  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.689947  528268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:35:49.690006  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.699358  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.708725  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.718744  528268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:35:49.727534  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.737013  528268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.745582  528268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:49.754308  528268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:35:49.762144  528268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
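Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a sketch reconstructed from the commands, not a dump of the actual file; the section placement follows CRI-O's documented config layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]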
	I1206 10:35:49.769875  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:49.884338  528268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:35:50.052236  528268 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:35:50.052348  528268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:35:50.057582  528268 start.go:564] Will wait 60s for crictl version
	I1206 10:35:50.057651  528268 ssh_runner.go:195] Run: which crictl
	I1206 10:35:50.062638  528268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:35:50.100652  528268 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:35:50.100743  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.139579  528268 ssh_runner.go:195] Run: crio --version
	I1206 10:35:50.174800  528268 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:35:50.177732  528268 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:35:50.194850  528268 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:35:50.201950  528268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:35:50.204938  528268 kubeadm.go:884] updating cluster {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:35:50.205078  528268 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:35:50.205145  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.240680  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.240692  528268 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:35:50.240750  528268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:50.267939  528268 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:50.267955  528268 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:35:50.267962  528268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:35:50.268053  528268 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-123579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
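The kubelet unit shown above is installed a few steps later via scp as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To confirm by hand that the drop-in overrides the packaged unit, a hypothetical verification step not performed in this run:

    systemctl cat kubelet                  # prints kubelet.service plus every drop-in, in merge order
    systemctl show -p ExecStart kubelet    # shows the effective (last) ExecStart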
	I1206 10:35:50.268129  528268 ssh_runner.go:195] Run: crio config
	I1206 10:35:50.326220  528268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:35:50.326240  528268 cni.go:84] Creating CNI manager for ""
	I1206 10:35:50.326248  528268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:35:50.326256  528268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:35:50.326280  528268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-123579 NodeName:functional-123579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:35:50.326407  528268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-123579"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
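This generated config is written out as /var/tmp/minikube/kubeadm.yaml.new a few lines below. For a manual sanity check of such a file, recent kubeadm versions ship a validator; a hypothetical invocation, not part of this run:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new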
	
	I1206 10:35:50.326477  528268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:35:50.334319  528268 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:35:50.334378  528268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:35:50.341826  528268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:35:50.354245  528268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:35:50.367015  528268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:35:50.379350  528268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:35:50.382958  528268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:50.504018  528268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:35:50.930865  528268 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579 for IP: 192.168.49.2
	I1206 10:35:50.930875  528268 certs.go:195] generating shared ca certs ...
	I1206 10:35:50.930889  528268 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:35:50.931046  528268 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 10:35:50.931093  528268 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 10:35:50.931099  528268 certs.go:257] generating profile certs ...
	I1206 10:35:50.931220  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.key
	I1206 10:35:50.931274  528268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key.fda7c087
	I1206 10:35:50.931318  528268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key
	I1206 10:35:50.931430  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 10:35:50.931460  528268 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 10:35:50.931466  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 10:35:50.931493  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:35:50.931515  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:35:50.931536  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 10:35:50.931577  528268 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 10:35:50.932148  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:35:50.953643  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:35:50.975543  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:35:50.998708  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:35:51.019841  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:35:51.038179  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:35:51.055740  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:35:51.075573  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:35:51.094756  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 10:35:51.113922  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:35:51.132368  528268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 10:35:51.150650  528268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:35:51.163984  528268 ssh_runner.go:195] Run: openssl version
	I1206 10:35:51.171418  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.179298  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 10:35:51.187013  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190756  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.190814  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 10:35:51.231889  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:35:51.239348  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.246609  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:35:51.254276  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258574  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.258631  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:51.301011  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:35:51.308790  528268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.316400  528268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 10:35:51.324195  528268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328353  528268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.328409  528268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 10:35:51.371753  528268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
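The <hash>.0 symlink names tested above (3ec20f2e.0, b5213941.0, 51391683.0) come from OpenSSL's subject-hash scheme: openssl x509 -hash prints an 8-hex-digit hash of the certificate's subject, and the TLS stack resolves trust by looking up <hash>.0 under /etc/ssl/certs. A sketch of the same install-and-verify dance for one of the certs:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo linked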
	I1206 10:35:51.379339  528268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:35:51.383319  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:35:51.424469  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:35:51.465529  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:35:51.511345  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:35:51.565170  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:35:51.614532  528268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
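-checkend 86400 asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that window, non-zero means it would expire. Standalone form of the same check:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid beyond 24h" || echo "expires within 24h"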
	I1206 10:35:51.665468  528268 kubeadm.go:401] StartCluster: {Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:51.665553  528268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:35:51.665612  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.699589  528268 cri.go:89] found id: ""
	I1206 10:35:51.699652  528268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:35:51.708250  528268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:35:51.708260  528268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:35:51.708318  528268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:35:51.716593  528268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.717135  528268 kubeconfig.go:125] found "functional-123579" server: "https://192.168.49.2:8441"
	I1206 10:35:51.718506  528268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:35:51.728290  528268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:21:13.758601441 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:35:50.371679399 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
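The drift detection is just the diff shown above: diff -u exits 1 when the files differ, and any difference (here, the swapped admission-plugin list) forces a full reconfigure instead of a plain restart. Equivalent standalone check:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null \
      || echo "config drift detected, reconfiguring"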
	I1206 10:35:51.728307  528268 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:35:51.728319  528268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:35:51.728381  528268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:35:51.763757  528268 cri.go:89] found id: ""
	I1206 10:35:51.763820  528268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:35:51.777420  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:35:51.785097  528268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:25 /etc/kubernetes/scheduler.conf
	
	I1206 10:35:51.785162  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:35:51.792642  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:35:51.800316  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.800387  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:35:51.808313  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.815662  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.815715  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:35:51.823153  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:35:51.831093  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:35:51.831167  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:35:51.838577  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:35:51.846346  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:51.894809  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:52.979571  528268 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084737023s)
	I1206 10:35:52.979630  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.188528  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:35:53.255794  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
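Rather than a full kubeadm init, the restart path replays individual init phases against the same config. The five invocations above, condensed into one sketch using the same binaries and flags as the log:

    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is deliberately unquoted so "certs all" splits into subcommand + argument
      sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
    done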
	I1206 10:35:53.309672  528268 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:35:53.309740  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[polling elided: sudo pgrep -xnf kube-apiserver.*minikube.* repeated every ~500ms from 10:35:53.810758 through 10:36:52.310532, 118 further identical attempts, none finding an apiserver process]
	I1206 10:36:52.810599  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
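The wait loop above is a plain process probe: pgrep -f matches against the full command line, -x requires the pattern to match that line exactly, and -n returns only the newest match. One-shot equivalent:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && echo "apiserver up" || echo "not yet"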
	I1206 10:36:53.310630  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:53.310706  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:53.342266  528268 cri.go:89] found id: ""
	I1206 10:36:53.342280  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.342287  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:53.342292  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:53.342356  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:53.368755  528268 cri.go:89] found id: ""
	I1206 10:36:53.368774  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.368781  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:53.368785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:53.368846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:53.393431  528268 cri.go:89] found id: ""
	I1206 10:36:53.393447  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.393454  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:53.393459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:53.393515  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:53.418954  528268 cri.go:89] found id: ""
	I1206 10:36:53.418967  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.418974  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:53.418979  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:53.419036  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:53.444726  528268 cri.go:89] found id: ""
	I1206 10:36:53.444740  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.444747  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:53.444752  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:53.444809  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:53.469041  528268 cri.go:89] found id: ""
	I1206 10:36:53.469054  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.469062  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:53.469067  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:53.469122  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:53.494455  528268 cri.go:89] found id: ""
	I1206 10:36:53.494468  528268 logs.go:282] 0 containers: []
	W1206 10:36:53.494475  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:53.494483  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:53.494496  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:53.557127  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:53.549369   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.549959   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551594   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.551939   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:53.553382   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:53.557137  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:53.557148  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:53.629870  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:53.629900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:53.661451  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:53.661466  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:53.730909  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:53.730927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
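Each failed probe window ends with the same four collections: the kubelet and CRI-O journals, a filtered dmesg, and the container list. To reproduce the bundle by hand inside the node, a sketch using the same commands as the log:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a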
	I1206 10:36:56.247245  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:56.257306  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:56.257364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:56.286141  528268 cri.go:89] found id: ""
	I1206 10:36:56.286155  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.286163  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:56.286168  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:56.286228  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:56.313467  528268 cri.go:89] found id: ""
	I1206 10:36:56.313481  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.313488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:56.313499  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:56.313559  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:56.340777  528268 cri.go:89] found id: ""
	I1206 10:36:56.340791  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.340798  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:56.340803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:56.340862  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:56.367085  528268 cri.go:89] found id: ""
	I1206 10:36:56.367099  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.367106  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:56.367111  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:56.367188  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:56.392392  528268 cri.go:89] found id: ""
	I1206 10:36:56.392407  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.392414  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:56.392420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:56.392482  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:56.417786  528268 cri.go:89] found id: ""
	I1206 10:36:56.417799  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.417807  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:56.417812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:56.417871  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:56.443872  528268 cri.go:89] found id: ""
	I1206 10:36:56.443886  528268 logs.go:282] 0 containers: []
	W1206 10:36:56.443893  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:56.443901  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:56.443911  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:56.509704  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:56.509723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:56.524726  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:56.524742  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:56.590779  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:56.582349   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.583075   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.584764   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.585326   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:56.586966   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:56.590789  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:56.590799  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:56.657863  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:56.657883  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.188879  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:36:59.199665  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:59.199726  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:59.232126  528268 cri.go:89] found id: ""
	I1206 10:36:59.232140  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.232148  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:36:59.232153  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:59.232212  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:59.257550  528268 cri.go:89] found id: ""
	I1206 10:36:59.257564  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.257571  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:36:59.257576  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:59.257633  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:59.282608  528268 cri.go:89] found id: ""
	I1206 10:36:59.282623  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.282630  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:36:59.282636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:59.282698  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:59.312791  528268 cri.go:89] found id: ""
	I1206 10:36:59.312806  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.312813  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:36:59.312819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:59.312881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:59.339361  528268 cri.go:89] found id: ""
	I1206 10:36:59.339376  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.339383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:36:59.339388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:59.339447  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:59.366255  528268 cri.go:89] found id: ""
	I1206 10:36:59.366269  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.366276  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:36:59.366281  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:59.366339  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:59.394131  528268 cri.go:89] found id: ""
	I1206 10:36:59.394145  528268 logs.go:282] 0 containers: []
	W1206 10:36:59.394152  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:59.394172  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:59.394182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:59.462514  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:36:59.462536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:59.491731  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:59.491747  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:59.562406  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:59.562426  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:59.577286  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:59.577302  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:59.642145  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:36:59.633850   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.634393   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636035   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.636643   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:59.638279   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
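
Every kubectl attempt here dies the same way: connection refused on localhost:8441, meaning nothing is listening where the kubeconfig points, rather than the apiserver answering unhealthily. Two quick checks, run on the node, that separate "socket closed" from "apiserver up but unhealthy"; port 8441 is taken from the errors above, and /livez is the standard liveness endpoint on current Kubernetes apiservers:

# Is anything listening on the apiserver port at all?
sudo ss -ltn 'sport = :8441'

# If a socket exists, ask the apiserver's liveness endpoint directly
# (-k because the serving cert is issued for the cluster, not localhost).
curl -sk https://localhost:8441/livez; echo
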
	I1206 10:37:02.143135  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:02.153343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:02.153402  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:02.182430  528268 cri.go:89] found id: ""
	I1206 10:37:02.182453  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.182460  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:02.182466  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:02.182529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:02.217140  528268 cri.go:89] found id: ""
	I1206 10:37:02.217164  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.217171  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:02.217176  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:02.217241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:02.264761  528268 cri.go:89] found id: ""
	I1206 10:37:02.264775  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.264795  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:02.264800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:02.264857  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:02.295104  528268 cri.go:89] found id: ""
	I1206 10:37:02.295118  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.295161  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:02.295166  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:02.295232  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:02.324690  528268 cri.go:89] found id: ""
	I1206 10:37:02.324704  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.324711  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:02.324716  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:02.324776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:02.354165  528268 cri.go:89] found id: ""
	I1206 10:37:02.354179  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.354187  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:02.354192  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:02.354250  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:02.379657  528268 cri.go:89] found id: ""
	I1206 10:37:02.379671  528268 logs.go:282] 0 containers: []
	W1206 10:37:02.379679  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:02.379686  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:02.379697  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:02.449725  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:02.449746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:02.464766  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:02.464783  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:02.527444  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:02.518942   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.519712   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.521458   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.522038   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:02.523598   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:02.527457  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:02.527467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:02.595482  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:02.595503  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.126581  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:05.136725  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:05.136783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:05.162008  528268 cri.go:89] found id: ""
	I1206 10:37:05.162022  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.162049  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:05.162055  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:05.162123  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:05.190290  528268 cri.go:89] found id: ""
	I1206 10:37:05.190305  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.190313  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:05.190318  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:05.190399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:05.222971  528268 cri.go:89] found id: ""
	I1206 10:37:05.223000  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.223008  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:05.223013  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:05.223083  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:05.249192  528268 cri.go:89] found id: ""
	I1206 10:37:05.249206  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.249213  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:05.249218  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:05.249285  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:05.280084  528268 cri.go:89] found id: ""
	I1206 10:37:05.280097  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.280104  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:05.280110  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:05.280176  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:05.306008  528268 cri.go:89] found id: ""
	I1206 10:37:05.306036  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.306044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:05.306049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:05.306115  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:05.331829  528268 cri.go:89] found id: ""
	I1206 10:37:05.331843  528268 logs.go:282] 0 containers: []
	W1206 10:37:05.331850  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:05.331858  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:05.331868  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:05.394775  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:05.386653   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.387484   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389032   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.389488   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:05.390957   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:05.394787  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:05.394798  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:05.463063  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:05.463082  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:05.496791  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:05.496808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:05.562749  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:05.562768  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.077865  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.088556  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:08.088628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:08.114942  528268 cri.go:89] found id: ""
	I1206 10:37:08.114956  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.114963  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:08.114969  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:08.115027  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:08.141141  528268 cri.go:89] found id: ""
	I1206 10:37:08.141155  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.141162  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:08.141167  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:08.141235  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:08.166303  528268 cri.go:89] found id: ""
	I1206 10:37:08.166318  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.166325  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:08.166334  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:08.166394  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:08.199234  528268 cri.go:89] found id: ""
	I1206 10:37:08.199248  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.199255  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:08.199260  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:08.199326  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:08.231753  528268 cri.go:89] found id: ""
	I1206 10:37:08.231767  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.231774  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:08.231780  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:08.231842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:08.260152  528268 cri.go:89] found id: ""
	I1206 10:37:08.260166  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.260173  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:08.260179  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:08.260241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:08.285346  528268 cri.go:89] found id: ""
	I1206 10:37:08.285360  528268 logs.go:282] 0 containers: []
	W1206 10:37:08.285367  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:08.285378  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:08.285388  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:08.353719  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:08.353740  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:08.385085  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:08.385101  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:08.459734  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:08.459762  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:08.474846  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:08.474862  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:08.546432  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:08.537844   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.538577   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540294   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.540933   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:08.542525   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
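
Note that crictl ps -a --quiet prints only container IDs, so the empty found id: "" results above mean CRI-O has no record of these containers in any state (-a includes exited ones), not merely that they crashed. When the control plane never materializes at all, a reasonable next step is to check whether the kubelet even has static-pod manifests to start and whether any pod sandboxes were created; the manifest path below is the standard kubeadm/minikube location, assumed rather than shown in this log:

# Static-pod manifests the kubelet is supposed to start (kubeadm default path).
ls -l /etc/kubernetes/manifests/

# Any pod sandboxes at all? An empty list points at the kubelet;
# sandboxes with no containers point at CRI-O or image pulls.
sudo crictl pods
sudo crictl ps -a
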
	I1206 10:37:11.048129  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.058654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:11.058714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:11.086873  528268 cri.go:89] found id: ""
	I1206 10:37:11.086889  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.086896  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:11.086903  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:11.086965  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:11.113880  528268 cri.go:89] found id: ""
	I1206 10:37:11.113904  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.113912  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:11.113918  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:11.113987  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:11.142338  528268 cri.go:89] found id: ""
	I1206 10:37:11.142361  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.142370  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:11.142375  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:11.142448  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:11.168341  528268 cri.go:89] found id: ""
	I1206 10:37:11.168355  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.168362  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:11.168368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:11.168425  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:11.218236  528268 cri.go:89] found id: ""
	I1206 10:37:11.218277  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.218285  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:11.218290  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:11.218357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:11.257366  528268 cri.go:89] found id: ""
	I1206 10:37:11.257379  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.257386  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:11.257391  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:11.257455  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:11.283202  528268 cri.go:89] found id: ""
	I1206 10:37:11.283224  528268 logs.go:282] 0 containers: []
	W1206 10:37:11.283235  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:11.283251  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:11.283269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:11.349630  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:11.349650  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:11.365578  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:11.365606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:11.431959  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:11.422904   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.423556   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425277   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.425941   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:11.427652   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:11.431970  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:11.431981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:11.502903  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:11.502922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:14.032953  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.043177  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:14.043291  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:14.068855  528268 cri.go:89] found id: ""
	I1206 10:37:14.068870  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.068877  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:14.068882  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:14.068946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:14.094277  528268 cri.go:89] found id: ""
	I1206 10:37:14.094290  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.094308  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:14.094315  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:14.094372  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:14.119916  528268 cri.go:89] found id: ""
	I1206 10:37:14.119930  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.119948  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:14.119954  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:14.120029  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:14.144999  528268 cri.go:89] found id: ""
	I1206 10:37:14.145012  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.145020  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:14.145026  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:14.145088  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:14.170372  528268 cri.go:89] found id: ""
	I1206 10:37:14.170386  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.170404  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:14.170409  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:14.170475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:14.220015  528268 cri.go:89] found id: ""
	I1206 10:37:14.220029  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.220036  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:14.220041  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:14.220102  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:14.249187  528268 cri.go:89] found id: ""
	I1206 10:37:14.249201  528268 logs.go:282] 0 containers: []
	W1206 10:37:14.249208  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:14.249216  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:14.249226  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:14.315809  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:14.315830  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:14.331228  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:14.331245  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:14.394665  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:14.386558   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.387326   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.388992   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.389309   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:14.390775   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:14.394676  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:14.394686  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:14.466599  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:14.466623  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:16.996304  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.008394  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:17.008453  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:17.036500  528268 cri.go:89] found id: ""
	I1206 10:37:17.036513  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.036521  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:17.036526  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:17.036591  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:17.064759  528268 cri.go:89] found id: ""
	I1206 10:37:17.064773  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.064780  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:17.064785  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:17.064846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:17.095263  528268 cri.go:89] found id: ""
	I1206 10:37:17.095276  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.095284  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:17.095300  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:17.095364  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:17.121651  528268 cri.go:89] found id: ""
	I1206 10:37:17.121665  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.121673  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:17.121678  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:17.121747  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:17.148683  528268 cri.go:89] found id: ""
	I1206 10:37:17.148697  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.148704  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:17.148711  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:17.148773  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:17.180504  528268 cri.go:89] found id: ""
	I1206 10:37:17.180518  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.180535  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:17.180542  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:17.180611  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:17.208816  528268 cri.go:89] found id: ""
	I1206 10:37:17.208830  528268 logs.go:282] 0 containers: []
	W1206 10:37:17.208837  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:17.208844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:17.208854  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:17.277798  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:17.277818  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:17.292728  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:17.292743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:17.366791  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:17.357858   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.358712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.360589   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.361199   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:17.362779   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
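
With no containers to inspect, the kubelet journal that minikube keeps re-collecting is the only place the root cause can surface. A hedged one-liner for skimming the same 400-line window the log gathers; the grep pattern is an illustrative assumption, not minikube's own filter:

sudo journalctl -u kubelet -n 400 --no-pager \
  | grep -iE 'fail|error|refused|manifest' \
  | tail -n 40
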
	I1206 10:37:17.366801  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:17.366812  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:17.434192  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:17.434212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:19.971273  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.981226  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:19.981286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:20.019762  528268 cri.go:89] found id: ""
	I1206 10:37:20.019777  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.019785  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:20.019791  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:20.019866  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:20.047256  528268 cri.go:89] found id: ""
	I1206 10:37:20.047270  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.047278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:20.047283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:20.047345  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:20.075694  528268 cri.go:89] found id: ""
	I1206 10:37:20.075708  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.075716  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:20.075721  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:20.075785  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:20.105896  528268 cri.go:89] found id: ""
	I1206 10:37:20.105910  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.105917  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:20.105922  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:20.105981  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:20.131910  528268 cri.go:89] found id: ""
	I1206 10:37:20.131923  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.131930  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:20.131935  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:20.131997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:20.157115  528268 cri.go:89] found id: ""
	I1206 10:37:20.157129  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.157135  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:20.157140  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:20.157202  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:20.188374  528268 cri.go:89] found id: ""
	I1206 10:37:20.188394  528268 logs.go:282] 0 containers: []
	W1206 10:37:20.188401  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:20.188423  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:20.188434  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:20.267587  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:20.267607  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:20.283222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:20.283238  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:20.348772  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:20.340427   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.341070   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342551   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.342988   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:20.344527   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:20.348783  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:20.348796  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:20.415451  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:20.415474  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:22.948223  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.959160  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:22.959221  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:22.985131  528268 cri.go:89] found id: ""
	I1206 10:37:22.985144  528268 logs.go:282] 0 containers: []
	W1206 10:37:22.985151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:22.985156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:22.985242  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:23.012336  528268 cri.go:89] found id: ""
	I1206 10:37:23.012350  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.012358  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:23.012363  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:23.012433  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:23.037784  528268 cri.go:89] found id: ""
	I1206 10:37:23.037808  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.037816  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:23.037822  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:23.037899  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:23.066240  528268 cri.go:89] found id: ""
	I1206 10:37:23.066254  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.066262  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:23.066267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:23.066335  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:23.090898  528268 cri.go:89] found id: ""
	I1206 10:37:23.090912  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.090921  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:23.090926  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:23.090993  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:23.116011  528268 cri.go:89] found id: ""
	I1206 10:37:23.116039  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.116047  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:23.116052  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:23.116127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:23.140768  528268 cri.go:89] found id: ""
	I1206 10:37:23.140781  528268 logs.go:282] 0 containers: []
	W1206 10:37:23.140788  528268 logs.go:284] No container was found matching "kindnet"
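
Each cycle sweeps the CRI once per control-plane component; crictl ps -a --quiet prints bare container IDs, so the empty output behind every `found id: ""` line means no matching container exists in any state. A sketch of that sweep, assuming crictl on the local PATH (minikube runs it inside the node over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The same component list the log walks through, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for _, name := range components {
		// -a includes exited containers; --quiet emits IDs only.
		out, _ := exec.Command("sudo", "crictl",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Println(name, "->", ids)
		} else {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}

An all-empty sweep is consistent with the dial failures below: none of the control-plane containers are present, so nothing can be serving on the apiserver port.
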
	I1206 10:37:23.140796  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:23.140806  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:23.210300  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:23.210319  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:23.229296  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:23.229311  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:23.297415  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:23.288972   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.289757   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291364   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.291944   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:23.293619   12082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
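
Every describe-nodes attempt fails identically: the discovery client retries the /api group-list request, printing one memcache.go error per attempt, and kubectl then summarizes with the connection-refused line. Nothing is listening on port 8441, the apiserver port in this profile's kubeconfig. A quick reachability probe for that endpoint (an illustrative sketch; it assumes it runs on the same host, so localhost resolves the same way):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the kubectl calls above keep dialing.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Expected here: dial tcp [::1]:8441: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
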
	I1206 10:37:23.297428  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:23.297438  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:23.364180  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:23.364200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:25.892120  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.902322  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:25.902381  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:25.931154  528268 cri.go:89] found id: ""
	I1206 10:37:25.931168  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.931175  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:25.931180  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:25.931245  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:25.957709  528268 cri.go:89] found id: ""
	I1206 10:37:25.957724  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.957731  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:25.957736  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:25.957793  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:25.985765  528268 cri.go:89] found id: ""
	I1206 10:37:25.985779  528268 logs.go:282] 0 containers: []
	W1206 10:37:25.985786  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:25.985791  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:25.985849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:26.016739  528268 cri.go:89] found id: ""
	I1206 10:37:26.016859  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.016867  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:26.016873  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:26.016945  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:26.043228  528268 cri.go:89] found id: ""
	I1206 10:37:26.043242  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.043252  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:26.043258  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:26.043331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:26.069862  528268 cri.go:89] found id: ""
	I1206 10:37:26.069888  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.069896  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:26.069902  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:26.069979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:26.097635  528268 cri.go:89] found id: ""
	I1206 10:37:26.097651  528268 logs.go:282] 0 containers: []
	W1206 10:37:26.097659  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:26.097666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:26.097677  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:26.163107  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:26.163132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
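
The dmesg pass keeps only warning-and-worse kernel messages and caps the result at 400 lines; on util-linux dmesg, -H is human-readable output, -P disables the pager, and -L=never turns colors off. A sketch that shells out the same pipeline (assuming a local bash and util-linux dmesg):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One bash pipeline so tail applies after the level filter.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg gather failed:", err)
	}
	fmt.Print(string(out))
}
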
	I1206 10:37:26.177703  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:26.177723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:26.254904  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:26.246698   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.247514   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249003   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.249473   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:26.250911   12186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:26.254915  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:26.254927  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:26.322703  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:26.322723  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:28.850178  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.860819  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:28.860878  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:28.887162  528268 cri.go:89] found id: ""
	I1206 10:37:28.887175  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.887183  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:28.887188  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:28.887246  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:28.912223  528268 cri.go:89] found id: ""
	I1206 10:37:28.912237  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.912251  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:28.912256  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:28.912318  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:28.937893  528268 cri.go:89] found id: ""
	I1206 10:37:28.937907  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.937914  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:28.937920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:28.937979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:28.966798  528268 cri.go:89] found id: ""
	I1206 10:37:28.966812  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.966819  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:28.966825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:28.966887  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:28.994392  528268 cri.go:89] found id: ""
	I1206 10:37:28.994406  528268 logs.go:282] 0 containers: []
	W1206 10:37:28.994413  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:28.994418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:28.994480  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:29.020703  528268 cri.go:89] found id: ""
	I1206 10:37:29.020718  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.020725  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:29.020730  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:29.020788  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:29.049956  528268 cri.go:89] found id: ""
	I1206 10:37:29.049969  528268 logs.go:282] 0 containers: []
	W1206 10:37:29.049977  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:29.049986  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:29.049998  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
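
journalctl -u kubelet -n 400 pulls the last 400 kubelet journal entries; the report only shows how long the gather took, not the content. A sketch that runs the same query and surfaces lines mentioning errors (the substring filter is an illustrative heuristic; --no-pager is added for interactive runs, though journalctl already skips the pager when its output is piped):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "journalctl",
		"-u", "kubelet", "-n", "400", "--no-pager").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		// Crude filter: keep only lines that mention an error at all.
		if strings.Contains(strings.ToLower(sc.Text()), "error") {
			fmt.Println(sc.Text())
		}
	}
}
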
	I1206 10:37:29.116113  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:29.116133  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:29.130937  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:29.130954  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:29.199649  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:29.191077   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.191848   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193554   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.193889   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:29.195340   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:29.199659  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:29.199670  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:29.271990  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:29.272011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:31.801925  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.812057  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:31.812130  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:31.837642  528268 cri.go:89] found id: ""
	I1206 10:37:31.837656  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.837663  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:31.837668  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:31.837724  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:31.863706  528268 cri.go:89] found id: ""
	I1206 10:37:31.863721  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.863728  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:31.863733  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:31.863795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:31.892284  528268 cri.go:89] found id: ""
	I1206 10:37:31.892298  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.892305  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:31.892310  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:31.892370  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:31.920973  528268 cri.go:89] found id: ""
	I1206 10:37:31.920987  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.920994  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:31.920999  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:31.921072  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:31.946196  528268 cri.go:89] found id: ""
	I1206 10:37:31.946209  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.946216  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:31.946221  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:31.946280  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:31.972154  528268 cri.go:89] found id: ""
	I1206 10:37:31.972168  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.972176  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:31.972182  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:31.972273  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:31.998166  528268 cri.go:89] found id: ""
	I1206 10:37:31.998179  528268 logs.go:282] 0 containers: []
	W1206 10:37:31.998194  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:31.998202  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:31.998212  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:32.066002  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:32.066020  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:32.081440  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:32.081456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:32.155010  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:32.146683   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.147230   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149014   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.149511   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:32.151065   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:32.155021  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:32.155032  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:32.239005  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:32.239035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:34.779578  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.789994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:34.790061  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:34.817069  528268 cri.go:89] found id: ""
	I1206 10:37:34.817083  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.817091  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:34.817096  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:34.817154  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:34.843456  528268 cri.go:89] found id: ""
	I1206 10:37:34.843470  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.843478  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:34.843483  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:34.843540  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:34.873150  528268 cri.go:89] found id: ""
	I1206 10:37:34.873164  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.873171  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:34.873176  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:34.873236  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:34.901463  528268 cri.go:89] found id: ""
	I1206 10:37:34.901476  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.901483  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:34.901489  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:34.901546  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:34.930362  528268 cri.go:89] found id: ""
	I1206 10:37:34.930376  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.930383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:34.930389  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:34.930460  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:34.955907  528268 cri.go:89] found id: ""
	I1206 10:37:34.955920  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.955928  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:34.955936  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:34.955997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:34.981646  528268 cri.go:89] found id: ""
	I1206 10:37:34.981660  528268 logs.go:282] 0 containers: []
	W1206 10:37:34.981667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:34.981676  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:34.981690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:35.051925  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:35.051946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:35.067379  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:35.067395  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:35.132911  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:35.124444   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.125082   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.126771   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.127367   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:35.128903   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:35.132921  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:35.132932  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:35.203071  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:35.203091  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:37.738787  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.749325  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:37.749395  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:37.777933  528268 cri.go:89] found id: ""
	I1206 10:37:37.777947  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.777955  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:37.777961  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:37.778018  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:37.803626  528268 cri.go:89] found id: ""
	I1206 10:37:37.803640  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.803647  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:37.803652  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:37.803711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:37.829518  528268 cri.go:89] found id: ""
	I1206 10:37:37.829532  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.829540  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:37.829545  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:37.829608  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:37.854832  528268 cri.go:89] found id: ""
	I1206 10:37:37.854846  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.854853  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:37.854858  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:37.854918  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:37.879627  528268 cri.go:89] found id: ""
	I1206 10:37:37.879641  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.879649  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:37.879654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:37.879712  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:37.906054  528268 cri.go:89] found id: ""
	I1206 10:37:37.906067  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.906074  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:37.906080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:37.906137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:37.931611  528268 cri.go:89] found id: ""
	I1206 10:37:37.931624  528268 logs.go:282] 0 containers: []
	W1206 10:37:37.931632  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:37.931640  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:37.931651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:37.997740  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:37.997760  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:38.023284  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:38.023303  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:38.091986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:38.082741   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.083460   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.085430   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.086101   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.087877   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:38.082741   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.083460   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.085430   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.086101   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:38.087877   12601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:38.092014  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:38.092027  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:38.163320  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:38.163343  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:40.709445  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.720016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:40.720077  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:40.745539  528268 cri.go:89] found id: ""
	I1206 10:37:40.745554  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.745561  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:40.745566  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:40.745630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:40.775524  528268 cri.go:89] found id: ""
	I1206 10:37:40.775538  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.775546  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:40.775552  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:40.775612  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:40.800974  528268 cri.go:89] found id: ""
	I1206 10:37:40.800988  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.800995  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:40.801001  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:40.801064  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:40.825855  528268 cri.go:89] found id: ""
	I1206 10:37:40.825869  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.825877  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:40.825882  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:40.825940  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:40.856039  528268 cri.go:89] found id: ""
	I1206 10:37:40.856052  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.856059  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:40.856064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:40.856129  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:40.886499  528268 cri.go:89] found id: ""
	I1206 10:37:40.886513  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.886520  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:40.886527  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:40.886586  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:40.913975  528268 cri.go:89] found id: ""
	I1206 10:37:40.913989  528268 logs.go:282] 0 containers: []
	W1206 10:37:40.913996  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:40.914004  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:40.914014  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:40.979882  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:40.979904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:40.995137  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:40.995155  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:41.060228  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:41.051325   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.052002   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.053633   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.054141   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.055869   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:41.051325   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.052002   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.053633   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.054141   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:41.055869   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:37:41.060245  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:41.060258  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:41.130025  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:41.130046  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.659238  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.669354  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:43.669430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:43.694872  528268 cri.go:89] found id: ""
	I1206 10:37:43.694886  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.694893  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:43.694899  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:43.694956  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:43.720265  528268 cri.go:89] found id: ""
	I1206 10:37:43.720278  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.720286  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:43.720290  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:43.720349  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:43.746213  528268 cri.go:89] found id: ""
	I1206 10:37:43.746226  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.746234  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:43.746239  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:43.746300  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:43.771902  528268 cri.go:89] found id: ""
	I1206 10:37:43.771916  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.771923  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:43.771928  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:43.771984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:43.797840  528268 cri.go:89] found id: ""
	I1206 10:37:43.797854  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.797874  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:43.797879  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:43.797949  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:43.823569  528268 cri.go:89] found id: ""
	I1206 10:37:43.823583  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.823590  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:43.823596  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:43.823654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:43.850154  528268 cri.go:89] found id: ""
	I1206 10:37:43.850169  528268 logs.go:282] 0 containers: []
	W1206 10:37:43.850187  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:43.850196  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:43.850207  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:43.919668  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:43.919690  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:43.954253  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:43.954269  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:44.019533  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:44.019556  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:44.034911  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:44.034930  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:44.098130  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:37:44.089450   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.090461   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.091451   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.092313   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:44.093171   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
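
The timestamps show this whole cycle repeating roughly every three seconds: a pgrep poll for a kube-apiserver process, followed by the container sweep and log gathering whenever the poll comes up empty. A sketch of such a wait loop (the function name and timeout are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep: -f matches the full command line, -x requires the
		// pattern to match it exactly, -n keeps only the newest match;
		// exit status 1 means no matching process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
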
	I1206 10:37:46.599796  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.610343  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:46.610410  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:46.637289  528268 cri.go:89] found id: ""
	I1206 10:37:46.637304  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.637311  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:46.637317  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:46.637380  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:46.664098  528268 cri.go:89] found id: ""
	I1206 10:37:46.664112  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.664118  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:46.664123  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:46.664183  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:46.693606  528268 cri.go:89] found id: ""
	I1206 10:37:46.693619  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.693638  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:46.693644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:46.693718  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:46.719425  528268 cri.go:89] found id: ""
	I1206 10:37:46.719438  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.719445  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:46.719451  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:46.719511  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:46.748960  528268 cri.go:89] found id: ""
	I1206 10:37:46.748974  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.748982  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:46.748987  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:46.749047  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:46.782749  528268 cri.go:89] found id: ""
	I1206 10:37:46.782763  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.782770  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:46.782776  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:46.782846  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:46.807615  528268 cri.go:89] found id: ""
	I1206 10:37:46.807629  528268 logs.go:282] 0 containers: []
	W1206 10:37:46.807636  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:46.807644  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:46.807654  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:46.838618  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:46.838634  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:46.905518  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:46.905537  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:46.920399  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:46.920417  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:46.985957  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:46.978179   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.978741   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980269   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.980715   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:46.982218   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
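	[Note on the loop above: minikube is waiting for the apiserver and repeats the same probe roughly every three seconds: look for a kube-apiserver process, ask the CRI for containers of each control-plane component, then gather kubelet, dmesg, CRI-O, and "describe nodes" output. The commands below are taken verbatim from the log and can be rerun by hand inside the node (for example via "minikube ssh") to reproduce the checks; this is a sketch of the probe, not the test harness itself.
	
	  sudo pgrep -xnf kube-apiserver.*minikube.*          # is an apiserver process running at all?
	  sudo crictl ps -a --quiet --name=kube-apiserver     # did the CRI ever create an apiserver container?
	  sudo journalctl -u kubelet -n 400                   # kubelet logs: why static pods are not starting
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	In this run every crictl query returns an empty list and kubectl gets "connection refused" on localhost:8441, so no control-plane container was ever created.]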
	I1206 10:37:46.985968  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:46.985981  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.555258  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.565209  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:49.565266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:49.593833  528268 cri.go:89] found id: ""
	I1206 10:37:49.593846  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.593853  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:49.593858  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:49.593914  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:49.621098  528268 cri.go:89] found id: ""
	I1206 10:37:49.621111  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.621119  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:49.621124  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:49.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:49.645669  528268 cri.go:89] found id: ""
	I1206 10:37:49.645681  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.645689  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:49.645694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:49.645750  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:49.672058  528268 cri.go:89] found id: ""
	I1206 10:37:49.672072  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.672080  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:49.672085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:49.672140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:49.696988  528268 cri.go:89] found id: ""
	I1206 10:37:49.697002  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.697009  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:49.697015  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:49.697076  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:49.723261  528268 cri.go:89] found id: ""
	I1206 10:37:49.723275  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.723282  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:49.723287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:49.723357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:49.750307  528268 cri.go:89] found id: ""
	I1206 10:37:49.750321  528268 logs.go:282] 0 containers: []
	W1206 10:37:49.750328  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:49.750336  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:49.750346  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:49.765699  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:49.765721  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:49.827929  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:49.819281   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.820177   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.821896   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.822193   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:49.823677   13021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:49.827938  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:49.827962  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:49.899802  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:49.899820  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:49.928018  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:49.928035  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.495744  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:52.505888  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:52.505958  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:52.532610  528268 cri.go:89] found id: ""
	I1206 10:37:52.532623  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.532631  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:52.532636  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:52.532695  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:52.558679  528268 cri.go:89] found id: ""
	I1206 10:37:52.558692  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.558700  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:52.558705  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:52.558762  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:52.585203  528268 cri.go:89] found id: ""
	I1206 10:37:52.585217  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.585225  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:52.585230  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:52.585286  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:52.611483  528268 cri.go:89] found id: ""
	I1206 10:37:52.611496  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.611503  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:52.611510  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:52.611568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:52.638054  528268 cri.go:89] found id: ""
	I1206 10:37:52.638067  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.638075  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:52.638080  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:52.638137  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:52.666746  528268 cri.go:89] found id: ""
	I1206 10:37:52.666760  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.666767  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:52.666773  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:52.666833  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:52.691974  528268 cri.go:89] found id: ""
	I1206 10:37:52.691997  528268 logs.go:282] 0 containers: []
	W1206 10:37:52.692005  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:52.692015  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:52.692025  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:52.761093  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:52.761113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:52.790376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:52.790392  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:52.858897  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:52.858915  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:52.873906  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:52.873923  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:52.937907  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:52.929773   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.930648   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932194   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.932561   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:52.934055   13143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.439279  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.450466  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:55.450529  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:55.483494  528268 cri.go:89] found id: ""
	I1206 10:37:55.483508  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.483515  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:55.483520  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:55.483576  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:55.515860  528268 cri.go:89] found id: ""
	I1206 10:37:55.515874  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.515881  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:55.515886  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:55.515942  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:55.542224  528268 cri.go:89] found id: ""
	I1206 10:37:55.542239  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.542248  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:55.542253  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:55.542311  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:55.567547  528268 cri.go:89] found id: ""
	I1206 10:37:55.567561  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.567568  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:55.567574  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:55.567630  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:55.594478  528268 cri.go:89] found id: ""
	I1206 10:37:55.594491  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.594499  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:55.594505  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:55.594568  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:55.620118  528268 cri.go:89] found id: ""
	I1206 10:37:55.620132  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.620146  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:55.620151  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:55.620210  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:55.644692  528268 cri.go:89] found id: ""
	I1206 10:37:55.644706  528268 logs.go:282] 0 containers: []
	W1206 10:37:55.644713  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:55.644721  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:55.644732  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:55.712056  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:55.702146   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.702755   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704324   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.704667   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:55.708009   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:55.712075  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:55.712085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:55.782393  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:55.782414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:37:55.817896  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:55.817913  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:55.892357  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:55.892385  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.407847  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.417968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:37:58.418026  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:37:58.446859  528268 cri.go:89] found id: ""
	I1206 10:37:58.446872  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.446879  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:37:58.446884  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:37:58.446946  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:37:58.475161  528268 cri.go:89] found id: ""
	I1206 10:37:58.475175  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.475182  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:37:58.475187  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:37:58.475244  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:37:58.503498  528268 cri.go:89] found id: ""
	I1206 10:37:58.503513  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.503520  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:37:58.503525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:37:58.503583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:37:58.529955  528268 cri.go:89] found id: ""
	I1206 10:37:58.529970  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.529977  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:37:58.529983  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:37:58.530038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:37:58.557174  528268 cri.go:89] found id: ""
	I1206 10:37:58.557188  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.557196  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:37:58.557201  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:37:58.557259  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:37:58.586116  528268 cri.go:89] found id: ""
	I1206 10:37:58.586130  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.586149  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:37:58.586156  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:37:58.586211  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:37:58.620339  528268 cri.go:89] found id: ""
	I1206 10:37:58.620353  528268 logs.go:282] 0 containers: []
	W1206 10:37:58.620361  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:37:58.620368  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:37:58.620379  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:37:58.686086  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:37:58.686105  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:37:58.700471  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:37:58.700487  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:37:58.772759  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:58.764751   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.765482   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767041   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.767492   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:58.769066   13337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:37:58.772768  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:37:58.772779  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:37:58.841699  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:37:58.841718  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.372136  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.382712  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:01.382776  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:01.410577  528268 cri.go:89] found id: ""
	I1206 10:38:01.410591  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.410598  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:01.410603  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:01.410666  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:01.444228  528268 cri.go:89] found id: ""
	I1206 10:38:01.444251  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.444258  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:01.444264  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:01.444331  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:01.486632  528268 cri.go:89] found id: ""
	I1206 10:38:01.486645  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.486652  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:01.486657  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:01.486717  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:01.518190  528268 cri.go:89] found id: ""
	I1206 10:38:01.518203  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.518210  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:01.518215  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:01.518276  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:01.543942  528268 cri.go:89] found id: ""
	I1206 10:38:01.543956  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.543963  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:01.543968  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:01.544032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:01.569769  528268 cri.go:89] found id: ""
	I1206 10:38:01.569803  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.569832  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:01.569845  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:01.569902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:01.594441  528268 cri.go:89] found id: ""
	I1206 10:38:01.594456  528268 logs.go:282] 0 containers: []
	W1206 10:38:01.594463  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:01.594471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:01.594482  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:01.609124  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:01.609139  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:01.671291  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:01.663080   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.663834   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665465   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.665773   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:01.667299   13440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:01.671302  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:01.671312  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:01.739749  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:01.739769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:01.768671  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:01.768687  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.339038  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.349363  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:04.349432  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:04.375032  528268 cri.go:89] found id: ""
	I1206 10:38:04.375045  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.375052  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:04.375058  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:04.375139  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:04.399997  528268 cri.go:89] found id: ""
	I1206 10:38:04.400011  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.400018  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:04.400023  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:04.400081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:04.424851  528268 cri.go:89] found id: ""
	I1206 10:38:04.424876  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.424884  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:04.424889  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:04.424959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:04.453149  528268 cri.go:89] found id: ""
	I1206 10:38:04.453162  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.453170  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:04.453175  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:04.453263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:04.483514  528268 cri.go:89] found id: ""
	I1206 10:38:04.483527  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.483534  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:04.483540  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:04.483598  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:04.511967  528268 cri.go:89] found id: ""
	I1206 10:38:04.511980  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.511987  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:04.511993  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:04.512048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:04.541164  528268 cri.go:89] found id: ""
	I1206 10:38:04.541175  528268 logs.go:282] 0 containers: []
	W1206 10:38:04.541182  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:04.541190  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:04.541199  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:04.575975  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:04.575991  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:04.642763  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:04.642781  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:04.657313  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:04.657336  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:04.721928  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:04.713076   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.713820   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.715564   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.716200   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:04.717981   13559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:04.721939  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:04.721952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:07.293453  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:07.303645  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:07.303708  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:07.329285  528268 cri.go:89] found id: ""
	I1206 10:38:07.329299  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.329306  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:07.329313  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:07.329371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:07.354889  528268 cri.go:89] found id: ""
	I1206 10:38:07.354903  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.354911  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:07.354916  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:07.354975  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:07.380496  528268 cri.go:89] found id: ""
	I1206 10:38:07.380510  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.380518  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:07.380523  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:07.380583  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:07.408252  528268 cri.go:89] found id: ""
	I1206 10:38:07.408265  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.408272  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:07.408278  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:07.408341  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:07.434563  528268 cri.go:89] found id: ""
	I1206 10:38:07.434577  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.434584  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:07.434590  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:07.434656  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:07.465668  528268 cri.go:89] found id: ""
	I1206 10:38:07.465681  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.465688  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:07.465694  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:07.465755  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:07.496206  528268 cri.go:89] found id: ""
	I1206 10:38:07.496220  528268 logs.go:282] 0 containers: []
	W1206 10:38:07.496227  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:07.496252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:07.496291  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:07.561228  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:07.561250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:07.576434  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:07.576450  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:07.645534  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:07.637588   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.638151   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.639755   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.640208   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:07.641673   13651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:07.645544  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:07.645555  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:07.713688  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:07.713708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:10.250054  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:10.260518  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:10.260577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:10.287264  528268 cri.go:89] found id: ""
	I1206 10:38:10.287283  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.287291  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:10.287296  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:10.287358  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:10.312333  528268 cri.go:89] found id: ""
	I1206 10:38:10.312347  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.312355  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:10.312360  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:10.312420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:10.336978  528268 cri.go:89] found id: ""
	I1206 10:38:10.336993  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.337000  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:10.337004  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:10.337069  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:10.363441  528268 cri.go:89] found id: ""
	I1206 10:38:10.363455  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.363463  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:10.363468  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:10.363526  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:10.388225  528268 cri.go:89] found id: ""
	I1206 10:38:10.388245  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.388253  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:10.388259  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:10.388320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:10.414362  528268 cri.go:89] found id: ""
	I1206 10:38:10.414375  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.414382  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:10.414388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:10.414445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:10.454478  528268 cri.go:89] found id: ""
	I1206 10:38:10.454491  528268 logs.go:282] 0 containers: []
	W1206 10:38:10.454499  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:10.454508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:10.454518  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:10.524830  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:10.524851  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:10.540277  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:10.540292  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:10.607931  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:10.599410   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.600137   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.601764   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.602052   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:10.604157   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
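	[The failure signature is consistent across all probes: crictl reports zero containers for every component and kubectl is refused on tcp [::1]:8441, which points at the kubelet never creating the control-plane static pods rather than an apiserver that crashed after starting. Assuming the standard kubeadm layout that minikube uses, two quick manual checks inside the node would confirm this; these commands are standard crictl/kubeadm usage but do not appear in this log:
	
	  ls /etc/kubernetes/manifests     # static pod manifests the kubelet should be running
	  sudo crictl pods                 # were any pod sandboxes created at all?
	
	An empty "crictl pods" with manifests present would implicate the kubelet/CRI-O side, which is why the probe also tails "journalctl -u kubelet" and "journalctl -u crio".]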
	I1206 10:38:10.607942  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:10.607955  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:10.675104  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:10.675134  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:13.206837  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:13.217943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:13.218002  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:13.243670  528268 cri.go:89] found id: ""
	I1206 10:38:13.243684  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.243691  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:13.243697  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:13.243758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:13.268428  528268 cri.go:89] found id: ""
	I1206 10:38:13.268443  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.268450  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:13.268455  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:13.268512  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:13.294024  528268 cri.go:89] found id: ""
	I1206 10:38:13.294038  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.294045  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:13.294050  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:13.294106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:13.321522  528268 cri.go:89] found id: ""
	I1206 10:38:13.321536  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.321543  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:13.321548  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:13.321610  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:13.351214  528268 cri.go:89] found id: ""
	I1206 10:38:13.351228  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.351235  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:13.351240  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:13.351299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:13.376433  528268 cri.go:89] found id: ""
	I1206 10:38:13.376447  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.376454  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:13.376459  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:13.376520  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:13.405980  528268 cri.go:89] found id: ""
	I1206 10:38:13.405994  528268 logs.go:282] 0 containers: []
	W1206 10:38:13.406001  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:13.406009  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:13.406019  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:13.481314  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:13.481334  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:13.503361  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:13.503378  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:13.570756  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:13.562069   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.562777   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.564575   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.565306   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:13.566790   13862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:13.570765  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:13.570778  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:13.641258  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:13.641282  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
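The iteration that just completed is one pass of a fixed polling cycle: minikube probes for a kube-apiserver process with pgrep, asks the CRI runtime for containers matching each control-plane component, and, finding none, re-collects kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before trying again. The timestamps show the cycle repeating roughly every three seconds. A minimal shell sketch of the same probe loop, reusing the exact pgrep and crictl invocations from the log (the five-minute deadline and exit messages are illustrative, not minikube's real values):

	#!/usr/bin/env bash
	# Poll for a running kube-apiserver the way the log above does:
	# first pgrep for the process, then ask the CRI runtime for containers.
	deadline=$((SECONDS + 300))          # illustrative 5-minute budget
	while (( SECONDS < deadline )); do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process found"; exit 0
	  fi
	  # --quiet prints only container IDs; empty output means no container
	  ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	  if [[ -n "$ids" ]]; then
	    echo "kube-apiserver container(s): $ids"; exit 0
	  fi
	  sleep 3                            # matches the ~3s cadence above
	done
	echo "timed out waiting for kube-apiserver" >&2; exit 1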
	I1206 10:38:16.171913  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.182483  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.182545  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.210129  528268 cri.go:89] found id: ""
	I1206 10:38:16.210143  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.210151  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.210156  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.210217  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.237040  528268 cri.go:89] found id: ""
	I1206 10:38:16.237060  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.237067  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.237073  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.237134  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:16.263801  528268 cri.go:89] found id: ""
	I1206 10:38:16.263815  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.263822  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:16.263827  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:16.263886  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:16.289263  528268 cri.go:89] found id: ""
	I1206 10:38:16.289277  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.289284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:16.289289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:16.289347  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:16.317849  528268 cri.go:89] found id: ""
	I1206 10:38:16.317862  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.317870  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:16.317875  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:16.317933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:16.347303  528268 cri.go:89] found id: ""
	I1206 10:38:16.347317  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.347324  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:16.347329  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:16.347387  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:16.373512  528268 cri.go:89] found id: ""
	I1206 10:38:16.373525  528268 logs.go:282] 0 containers: []
	W1206 10:38:16.373542  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:16.373552  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:16.373568  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:16.438751  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:16.438769  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:16.455447  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:16.455463  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:16.527176  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:16.518992   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.519800   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.521522   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.522056   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:16.523116   13966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:16.527186  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:16.527196  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:16.595033  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:16.595053  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:19.127162  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.137626  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.137685  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.168715  528268 cri.go:89] found id: ""
	I1206 10:38:19.168729  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.168736  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.168741  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.168798  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.199324  528268 cri.go:89] found id: ""
	I1206 10:38:19.199341  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.199354  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.199359  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.199418  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.225589  528268 cri.go:89] found id: ""
	I1206 10:38:19.225601  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.225608  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.225613  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.225670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.251399  528268 cri.go:89] found id: ""
	I1206 10:38:19.251412  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.251420  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.251425  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.251488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:19.276108  528268 cri.go:89] found id: ""
	I1206 10:38:19.276122  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.276129  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:19.276134  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:19.276193  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:19.301269  528268 cri.go:89] found id: ""
	I1206 10:38:19.301282  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.301290  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:19.301295  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:19.301352  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:19.327537  528268 cri.go:89] found id: ""
	I1206 10:38:19.327552  528268 logs.go:282] 0 containers: []
	W1206 10:38:19.327559  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:19.327568  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:19.327578  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.398088  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:19.398114  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:19.413590  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:19.413609  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:19.517843  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:19.509322   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.509746   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511448   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.511962   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:19.513543   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:19.517853  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:19.517866  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:19.587464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:19.587485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.115984  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.126048  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.126111  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.152880  528268 cri.go:89] found id: ""
	I1206 10:38:22.152893  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.152900  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.152905  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.152961  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.179175  528268 cri.go:89] found id: ""
	I1206 10:38:22.179190  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.179197  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.179202  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.179263  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.204543  528268 cri.go:89] found id: ""
	I1206 10:38:22.204557  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.204565  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.204570  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.204631  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.229269  528268 cri.go:89] found id: ""
	I1206 10:38:22.229283  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.229291  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.229296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.229353  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.255404  528268 cri.go:89] found id: ""
	I1206 10:38:22.255418  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.255425  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.255430  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.255488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.280965  528268 cri.go:89] found id: ""
	I1206 10:38:22.280981  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.280988  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.280994  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.281052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:22.309901  528268 cri.go:89] found id: ""
	I1206 10:38:22.309915  528268 logs.go:282] 0 containers: []
	W1206 10:38:22.309922  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:22.309930  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:22.309940  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:22.382110  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:22.382130  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:22.412045  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:22.412060  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:22.485902  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:22.485921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.501637  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:22.501655  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:22.572937  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:22.565172   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.565547   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567025   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.567515   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:22.569137   14196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
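Every describe-nodes attempt in these cycles fails identically: kubectl cannot open a TCP connection to the apiserver at localhost:8441, which is consistent with the empty crictl listings (there is no kube-apiserver container to accept the connection). The failure is reproducible by hand; a short sketch, assuming curl is available on the node (/livez is the standard Kubernetes apiserver health endpoint):

	# Probe the endpoint kubectl is failing to reach. With no apiserver
	# running, both checks fail with "connection refused".
	curl -sk --max-time 5 https://localhost:8441/livez \
	  || echo "apiserver unreachable on :8441"
	# Raw TCP check without TLS, using bash's /dev/tcp redirection:
	timeout 5 bash -c '</dev/tcp/localhost/8441' 2>/dev/null \
	  && echo "port 8441 open" || echo "port 8441 closed"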
	I1206 10:38:25.074598  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.085017  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.085084  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.110479  528268 cri.go:89] found id: ""
	I1206 10:38:25.110493  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.110500  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.110506  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.110566  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.137467  528268 cri.go:89] found id: ""
	I1206 10:38:25.137481  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.137488  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.137493  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.137552  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.163017  528268 cri.go:89] found id: ""
	I1206 10:38:25.163033  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.163040  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.163046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.163105  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.193876  528268 cri.go:89] found id: ""
	I1206 10:38:25.193890  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.193898  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.193903  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.193966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.220362  528268 cri.go:89] found id: ""
	I1206 10:38:25.220376  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.220383  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.220388  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.220444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.246057  528268 cri.go:89] found id: ""
	I1206 10:38:25.246070  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.246078  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.246083  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.246140  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.273646  528268 cri.go:89] found id: ""
	I1206 10:38:25.273660  528268 logs.go:282] 0 containers: []
	W1206 10:38:25.273667  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.273675  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.273691  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:25.341507  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:25.341527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:25.356890  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:25.356906  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:25.432607  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:25.423528   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.424336   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.425943   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.426718   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:25.428396   14283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:25.432617  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:25.432628  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:25.515030  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.515052  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.053670  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.064577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.064641  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.091082  528268 cri.go:89] found id: ""
	I1206 10:38:28.091097  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.091106  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.091111  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.091205  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.116793  528268 cri.go:89] found id: ""
	I1206 10:38:28.116808  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.116815  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.116822  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.116881  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.145938  528268 cri.go:89] found id: ""
	I1206 10:38:28.145952  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.145960  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.145965  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.146025  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.171742  528268 cri.go:89] found id: ""
	I1206 10:38:28.171755  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.171763  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.171768  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.171826  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.197528  528268 cri.go:89] found id: ""
	I1206 10:38:28.197542  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.197549  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.197554  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.197613  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.224277  528268 cri.go:89] found id: ""
	I1206 10:38:28.224291  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.224298  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.224303  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.224368  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.252201  528268 cri.go:89] found id: ""
	I1206 10:38:28.252215  528268 logs.go:282] 0 containers: []
	W1206 10:38:28.252223  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.252237  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:28.252248  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:28.284626  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.284642  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.351035  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.351055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.366043  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.366061  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:28.437473  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:28.427946   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.428958   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430082   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.430867   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:28.432711   14397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:28.437483  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:28.437506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
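On each failed probe the gathering step shells out to journalctl for the kubelet and CRI-O systemd units, filters dmesg to warnings and above, and falls back from crictl to docker for container status. The commands appear verbatim in the log; condensed here into one runnable sequence:

	# The same diagnostics the loop collects after every failed probe.
	sudo journalctl -u kubelet -n 400        # kubelet unit logs
	sudo journalctl -u crio -n 400           # CRI-O unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Container status: prefer crictl, fall back to docker if absent.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a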
	I1206 10:38:31.019982  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.030426  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.030488  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.055406  528268 cri.go:89] found id: ""
	I1206 10:38:31.055419  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.055427  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.055432  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.055490  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.081639  528268 cri.go:89] found id: ""
	I1206 10:38:31.081653  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.081660  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.081665  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.081729  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.111871  528268 cri.go:89] found id: ""
	I1206 10:38:31.111886  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.111894  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.111899  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.111959  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.142949  528268 cri.go:89] found id: ""
	I1206 10:38:31.142964  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.142971  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.142977  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.143042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.169930  528268 cri.go:89] found id: ""
	I1206 10:38:31.169946  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.169954  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.169959  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.170020  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.196019  528268 cri.go:89] found id: ""
	I1206 10:38:31.196033  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.196041  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.196046  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.196104  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.226526  528268 cri.go:89] found id: ""
	I1206 10:38:31.226540  528268 logs.go:282] 0 containers: []
	W1206 10:38:31.226547  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.226556  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.226567  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.289723  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.280542   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.281325   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283214   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.283972   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.285734   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.289733  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:31.289746  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:31.358922  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.358941  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.387252  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.387268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:31.460730  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:31.460749  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:33.977403  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:33.987866  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:33.987933  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.023637  528268 cri.go:89] found id: ""
	I1206 10:38:34.023651  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.023659  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.023664  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.023728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.052242  528268 cri.go:89] found id: ""
	I1206 10:38:34.052256  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.052263  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.052269  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.052330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.077707  528268 cri.go:89] found id: ""
	I1206 10:38:34.077721  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.077728  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.077734  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.077795  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.103066  528268 cri.go:89] found id: ""
	I1206 10:38:34.103079  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.103098  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.103103  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.103185  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.132994  528268 cri.go:89] found id: ""
	I1206 10:38:34.133007  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.133015  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.133020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.133081  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.159017  528268 cri.go:89] found id: ""
	I1206 10:38:34.159030  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.159038  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.159043  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.159101  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.185998  528268 cri.go:89] found id: ""
	I1206 10:38:34.186012  528268 logs.go:282] 0 containers: []
	W1206 10:38:34.186020  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.186028  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.186042  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.257644  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.257664  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.273073  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.273092  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.344235  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.334637   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.335521   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337181   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.337760   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.339605   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.344247  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:34.344260  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:34.414848  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.414867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:36.966180  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:36.976392  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:36.976457  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.002549  528268 cri.go:89] found id: ""
	I1206 10:38:37.002566  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.002574  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.002580  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.002657  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.033009  528268 cri.go:89] found id: ""
	I1206 10:38:37.033024  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.033031  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.033037  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.033106  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.059257  528268 cri.go:89] found id: ""
	I1206 10:38:37.059271  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.059279  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.059285  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.059346  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.090436  528268 cri.go:89] found id: ""
	I1206 10:38:37.090449  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.090457  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.090462  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.090523  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.118194  528268 cri.go:89] found id: ""
	I1206 10:38:37.118208  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.118215  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.118222  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.118284  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.144022  528268 cri.go:89] found id: ""
	I1206 10:38:37.144036  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.144044  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.144049  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.144107  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.168416  528268 cri.go:89] found id: ""
	I1206 10:38:37.168430  528268 logs.go:282] 0 containers: []
	W1206 10:38:37.168438  528268 logs.go:284] No container was found matching "kindnet"
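The scan above probes each control-plane component with crictl and treats an empty ID list as "no container found". A minimal sketch of the same loop, with the component list copied from the log (illustrative shell, not minikube's actual code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")       # list container IDs in any state
      [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
    done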
	I1206 10:38:37.168445  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.168456  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.234878  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.234898  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.250351  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.250374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.316139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.307238   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.308163   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.309976   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.310399   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.312153   14702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:37.316149  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:37.316159  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:37.385780  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.385800  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:39.916327  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:39.926345  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:39.926412  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:39.953639  528268 cri.go:89] found id: ""
	I1206 10:38:39.953652  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.953660  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:39.953671  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:39.953732  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:39.979049  528268 cri.go:89] found id: ""
	I1206 10:38:39.979064  528268 logs.go:282] 0 containers: []
	W1206 10:38:39.979072  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:39.979077  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:39.979164  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.013684  528268 cri.go:89] found id: ""
	I1206 10:38:40.013700  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.013708  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.013714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.013783  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.052804  528268 cri.go:89] found id: ""
	I1206 10:38:40.052820  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.052828  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.052834  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.052902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.084356  528268 cri.go:89] found id: ""
	I1206 10:38:40.084372  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.084380  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.084386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.084451  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.112282  528268 cri.go:89] found id: ""
	I1206 10:38:40.112297  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.112304  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.112312  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.112373  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.140065  528268 cri.go:89] found id: ""
	I1206 10:38:40.140080  528268 logs.go:282] 0 containers: []
	W1206 10:38:40.140087  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.140094  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.140108  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.208521  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.199450   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.200296   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202102   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.202795   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.204574   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:40.208530  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:40.208541  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:40.280105  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.280126  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.313393  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.313409  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.380769  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.380789  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
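Each retry gathers the same five log sources in varying order. Four are plain node commands, reproducible by hand exactly as shown in the log; the fifth ("describe nodes") is the kubectl invocation that keeps failing above:

    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo journalctl -u crio -n 400                                             # CRI-O runtime logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, docker fallback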
	I1206 10:38:42.896735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:42.906913  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:42.906971  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:42.932466  528268 cri.go:89] found id: ""
	I1206 10:38:42.932480  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.932493  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:42.932499  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:42.932560  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:42.962618  528268 cri.go:89] found id: ""
	I1206 10:38:42.962633  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.962641  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:42.962647  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:42.962704  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:42.989497  528268 cri.go:89] found id: ""
	I1206 10:38:42.989511  528268 logs.go:282] 0 containers: []
	W1206 10:38:42.989519  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:42.989525  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:42.989581  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.016798  528268 cri.go:89] found id: ""
	I1206 10:38:43.016818  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.016825  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.016831  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.017042  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.044571  528268 cri.go:89] found id: ""
	I1206 10:38:43.044589  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.044599  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.044606  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.044679  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.072240  528268 cri.go:89] found id: ""
	I1206 10:38:43.072256  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.072264  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.072269  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.072330  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.098196  528268 cri.go:89] found id: ""
	I1206 10:38:43.098211  528268 logs.go:282] 0 containers: []
	W1206 10:38:43.098218  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.098225  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.098237  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.113559  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.113577  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.177585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.169460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.169877   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.171569   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.172135   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.173643   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:43.177595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:43.177606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:43.251189  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.251210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.278658  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.278673  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:45.849509  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:45.861204  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:45.861266  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:45.888209  528268 cri.go:89] found id: ""
	I1206 10:38:45.888228  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.888236  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:45.888241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:45.888306  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:45.913344  528268 cri.go:89] found id: ""
	I1206 10:38:45.913357  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.913365  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:45.913370  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:45.913429  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:45.939830  528268 cri.go:89] found id: ""
	I1206 10:38:45.939844  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.939852  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:45.939857  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:45.939927  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:45.964893  528268 cri.go:89] found id: ""
	I1206 10:38:45.964907  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.964914  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:45.964920  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:45.964984  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:45.991528  528268 cri.go:89] found id: ""
	I1206 10:38:45.991540  528268 logs.go:282] 0 containers: []
	W1206 10:38:45.991548  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:45.991553  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:45.991614  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.018162  528268 cri.go:89] found id: ""
	I1206 10:38:46.018176  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.018184  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.018190  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.018249  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.045784  528268 cri.go:89] found id: ""
	I1206 10:38:46.045807  528268 logs.go:282] 0 containers: []
	W1206 10:38:46.045814  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.045822  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.045833  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.114786  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.105174   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.106040   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.107658   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.108307   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.110017   15003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
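The "describe nodes" step pins kubectl to the node-local binary and kubeconfig rather than the host's. Once the apiserver is reachable, the same command can be replayed manually (binary and kubeconfig paths copied from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig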
	I1206 10:38:46.114796  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:46.114808  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:46.185171  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.185193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.213442  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.213458  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.280354  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.280374  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:48.796511  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:48.807012  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:48.807073  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:48.832313  528268 cri.go:89] found id: ""
	I1206 10:38:48.832337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.832344  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:48.832349  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:48.832420  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:48.857914  528268 cri.go:89] found id: ""
	I1206 10:38:48.857928  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.857935  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:48.857940  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:48.858000  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:48.887721  528268 cri.go:89] found id: ""
	I1206 10:38:48.887735  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.887743  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:48.887748  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:48.887808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:48.912329  528268 cri.go:89] found id: ""
	I1206 10:38:48.912343  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.912351  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:48.912356  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:48.912416  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:48.942323  528268 cri.go:89] found id: ""
	I1206 10:38:48.942337  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.942344  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:48.942349  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:48.942408  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:48.971776  528268 cri.go:89] found id: ""
	I1206 10:38:48.971790  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.971798  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:48.971803  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:48.971861  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:48.997054  528268 cri.go:89] found id: ""
	I1206 10:38:48.997068  528268 logs.go:282] 0 containers: []
	W1206 10:38:48.997076  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:48.997084  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:48.997095  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:49.071387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.071413  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.099724  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.099743  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.165471  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.165492  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.180707  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.180755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.246459  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.238180   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.239038   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.240759   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.241079   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.242605   15130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:51.747477  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:51.757424  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:51.757483  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:51.785368  528268 cri.go:89] found id: ""
	I1206 10:38:51.785382  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.785390  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:51.785395  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:51.785452  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:51.814468  528268 cri.go:89] found id: ""
	I1206 10:38:51.814482  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.814489  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:51.814494  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:51.814553  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:51.839897  528268 cri.go:89] found id: ""
	I1206 10:38:51.839911  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.839918  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:51.839923  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:51.839980  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:51.865924  528268 cri.go:89] found id: ""
	I1206 10:38:51.865938  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.865951  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:51.865956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:51.866011  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:51.891688  528268 cri.go:89] found id: ""
	I1206 10:38:51.891702  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.891709  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:51.891714  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:51.891772  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:51.917048  528268 cri.go:89] found id: ""
	I1206 10:38:51.917062  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.917070  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:51.917075  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:51.917132  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:51.942873  528268 cri.go:89] found id: ""
	I1206 10:38:51.942888  528268 logs.go:282] 0 containers: []
	W1206 10:38:51.942895  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:51.942903  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:51.942914  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.011199  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.001318   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.002485   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.003254   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005112   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.005720   15218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:52.011209  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:52.011220  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:52.085464  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.085485  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.119213  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.119230  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:52.189731  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.189751  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.705436  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:54.717135  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:54.717196  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:54.755081  528268 cri.go:89] found id: ""
	I1206 10:38:54.755095  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.755105  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:54.755110  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:54.755209  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:54.780971  528268 cri.go:89] found id: ""
	I1206 10:38:54.780985  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.780993  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:54.780998  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:54.781060  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:54.806877  528268 cri.go:89] found id: ""
	I1206 10:38:54.806891  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.806898  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:54.806904  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:54.806967  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:54.832627  528268 cri.go:89] found id: ""
	I1206 10:38:54.832641  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.832649  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:54.832654  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:54.832711  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:54.857814  528268 cri.go:89] found id: ""
	I1206 10:38:54.857828  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.857836  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:54.857841  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:54.857897  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:54.883738  528268 cri.go:89] found id: ""
	I1206 10:38:54.883752  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.883759  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:54.883764  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:54.883821  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:54.909479  528268 cri.go:89] found id: ""
	I1206 10:38:54.909493  528268 logs.go:282] 0 containers: []
	W1206 10:38:54.909500  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:54.909508  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:54.909519  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:54.975629  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:54.975651  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:54.991150  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:54.991166  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.064619  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.054168   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.054825   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058121   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.058810   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.060748   15329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:55.064628  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:55.064639  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:38:55.134387  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.134406  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.664428  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:57.675264  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:57.675328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:57.709021  528268 cri.go:89] found id: ""
	I1206 10:38:57.709035  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.709043  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:57.709048  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:38:57.709116  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:57.744132  528268 cri.go:89] found id: ""
	I1206 10:38:57.744146  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.744153  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:38:57.744159  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:38:57.744226  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:57.778746  528268 cri.go:89] found id: ""
	I1206 10:38:57.778760  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.778767  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:38:57.778772  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:57.778829  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:57.805263  528268 cri.go:89] found id: ""
	I1206 10:38:57.805276  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.805284  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:57.805289  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:57.805348  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:57.831152  528268 cri.go:89] found id: ""
	I1206 10:38:57.831166  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.831173  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:57.831178  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:57.831240  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:57.857097  528268 cri.go:89] found id: ""
	I1206 10:38:57.857111  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.857119  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:57.857124  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:57.857189  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:57.882945  528268 cri.go:89] found id: ""
	I1206 10:38:57.882984  528268 logs.go:282] 0 containers: []
	W1206 10:38:57.882992  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:57.883000  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:38:57.883011  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:57.915176  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:57.915193  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:57.981939  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:57.981958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:57.997358  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:57.997373  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.070527  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.061092   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.061631   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.063614   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.064325   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.065286   15449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:58.070538  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:38:58.070549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.641789  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.651800  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.651859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.679593  528268 cri.go:89] found id: ""
	I1206 10:39:00.679606  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.679613  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.679618  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.679673  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:00.712252  528268 cri.go:89] found id: ""
	I1206 10:39:00.712266  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.712273  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:00.712278  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:00.712337  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:00.746867  528268 cri.go:89] found id: ""
	I1206 10:39:00.746881  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.746888  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:00.746894  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:00.746954  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:00.779153  528268 cri.go:89] found id: ""
	I1206 10:39:00.779167  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.779174  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:00.779180  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:00.779241  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:00.805143  528268 cri.go:89] found id: ""
	I1206 10:39:00.805157  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.805164  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:00.805170  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:00.805227  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:00.831339  528268 cri.go:89] found id: ""
	I1206 10:39:00.831353  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.831361  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:00.831368  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:00.831430  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:00.857571  528268 cri.go:89] found id: ""
	I1206 10:39:00.857585  528268 logs.go:282] 0 containers: []
	W1206 10:39:00.857593  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:00.857600  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:00.857611  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:00.925179  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:00.917222   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.917610   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919217   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.919688   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:00.921308   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:00.925189  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:00.925200  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:00.994191  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:00.994210  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.029067  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.029085  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.100689  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.100709  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
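
	The round above shows minikube probing each expected control-plane container with crictl and finding none ("found id: """). A minimal sketch for reproducing that probe by hand on the node, assuming crictl is on PATH (the log's own `which crictl || echo crictl` fallback makes the same assumption):

	    # Probe every control-plane component the same way the log does;
	    # an empty result means no container (running or exited) exists for it.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<no containers>}"
	    done
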
	I1206 10:39:03.616374  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.626603  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.626714  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.651732  528268 cri.go:89] found id: ""
	I1206 10:39:03.651746  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.651753  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.651758  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.651818  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.679359  528268 cri.go:89] found id: ""
	I1206 10:39:03.679373  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.679380  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.679385  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.679442  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:03.714610  528268 cri.go:89] found id: ""
	I1206 10:39:03.714624  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.714631  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:03.714636  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:03.714693  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:03.745765  528268 cri.go:89] found id: ""
	I1206 10:39:03.745780  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.745787  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:03.745792  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:03.745849  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:03.771225  528268 cri.go:89] found id: ""
	I1206 10:39:03.771239  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.771247  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:03.771252  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:03.771316  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:03.796796  528268 cri.go:89] found id: ""
	I1206 10:39:03.796853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.796861  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:03.796867  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:03.796925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:03.822839  528268 cri.go:89] found id: ""
	I1206 10:39:03.822853  528268 logs.go:282] 0 containers: []
	W1206 10:39:03.822861  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:03.822878  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:03.822888  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:03.858844  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:03.858860  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:03.925683  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:03.925703  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:03.941280  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:03.941297  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.009034  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:03.997692   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:03.998374   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001181   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.001673   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.003993   15656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:04.009044  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:04.009055  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
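
	Every `describe nodes` attempt fails the same way: kubectl cannot reach https://localhost:8441, and "connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, not that the server rejected the request. A quick diagnostic sketch using standard Linux tooling (/livez is the apiserver's standard liveness endpoint):

	    # Confirm nothing is serving on the apiserver port (8441 for this profile).
	    sudo ss -ltnp | grep 8441 || echo "no listener on port 8441"
	    # If something were listening, this would return "ok" from the apiserver.
	    curl -sk https://localhost:8441/livez || echo "apiserver not reachable"
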
	I1206 10:39:06.582354  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.592267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.592340  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.617889  528268 cri.go:89] found id: ""
	I1206 10:39:06.617902  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.617909  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.617915  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.617979  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.643951  528268 cri.go:89] found id: ""
	I1206 10:39:06.643966  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.643973  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.643978  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.644035  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.669753  528268 cri.go:89] found id: ""
	I1206 10:39:06.669767  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.669774  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.669779  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.669839  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.701353  528268 cri.go:89] found id: ""
	I1206 10:39:06.701373  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.701380  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.701386  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.701445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.751930  528268 cri.go:89] found id: ""
	I1206 10:39:06.751944  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.751952  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.751956  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.752019  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:06.778713  528268 cri.go:89] found id: ""
	I1206 10:39:06.778727  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.778734  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:06.778741  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:06.778802  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:06.804251  528268 cri.go:89] found id: ""
	I1206 10:39:06.804265  528268 logs.go:282] 0 containers: []
	W1206 10:39:06.804273  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:06.804280  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:06.804290  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.871350  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:06.871368  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:06.885942  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:06.885960  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:06.959058  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:06.950158   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951219   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.951835   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.953474   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:06.954070   15747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:06.959068  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:06.959081  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:07.030114  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.030135  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
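
	The "Gathering logs" steps are plain shell invocations, as the Run: lines show. The same diagnostic bundle can be collected by hand; this sketch simply replays the commands from the log and writes each stream to a file (output redirection happens as the invoking user, so run it from a writable directory):

	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo journalctl -u crio -n 400 > crio.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo crictl ps -a > containers.log
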
	I1206 10:39:09.559397  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.569971  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.570039  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.595039  528268 cri.go:89] found id: ""
	I1206 10:39:09.595052  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.595059  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.595065  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.595152  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.621113  528268 cri.go:89] found id: ""
	I1206 10:39:09.621127  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.621135  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.621140  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.621203  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.651003  528268 cri.go:89] found id: ""
	I1206 10:39:09.651016  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.651024  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.651029  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.651087  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.677104  528268 cri.go:89] found id: ""
	I1206 10:39:09.677118  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.677125  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.677131  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.677187  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.713565  528268 cri.go:89] found id: ""
	I1206 10:39:09.713579  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.713587  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.713592  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.713653  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.741915  528268 cri.go:89] found id: ""
	I1206 10:39:09.741928  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.741935  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.741941  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.741997  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.774013  528268 cri.go:89] found id: ""
	I1206 10:39:09.774027  528268 logs.go:282] 0 containers: []
	W1206 10:39:09.774035  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.774042  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.774054  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:09.840091  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:09.840113  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.855657  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:09.855675  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:09.919867  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:09.911210   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.911783   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.913473   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.914124   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:09.915891   15854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:09.919877  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:09.919901  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:09.991592  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:09.991613  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.526559  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.537148  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.537208  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.570214  528268 cri.go:89] found id: ""
	I1206 10:39:12.570228  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.570235  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.570241  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.570299  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.595309  528268 cri.go:89] found id: ""
	I1206 10:39:12.595324  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.595331  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.595342  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.595401  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.620408  528268 cri.go:89] found id: ""
	I1206 10:39:12.620422  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.620429  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.620434  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.620495  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.645606  528268 cri.go:89] found id: ""
	I1206 10:39:12.645621  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.645628  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.645644  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.645700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.672105  528268 cri.go:89] found id: ""
	I1206 10:39:12.672119  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.672126  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.672132  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.672191  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.699949  528268 cri.go:89] found id: ""
	I1206 10:39:12.699964  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.699971  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.699976  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.700038  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.730867  528268 cri.go:89] found id: ""
	I1206 10:39:12.730881  528268 logs.go:282] 0 containers: []
	W1206 10:39:12.730888  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.730896  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.730907  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.760666  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.760682  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.827918  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.827939  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:12.845229  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:12.845250  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:12.913571  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:12.905225   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.906413   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.907377   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.908192   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:12.909739   15968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:12.913582  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:12.913606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.486285  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.496339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.496397  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.522751  528268 cri.go:89] found id: ""
	I1206 10:39:15.522765  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.522773  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.522782  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.522842  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.548733  528268 cri.go:89] found id: ""
	I1206 10:39:15.548747  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.548760  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.548765  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.548823  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.574392  528268 cri.go:89] found id: ""
	I1206 10:39:15.574406  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.574413  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.574418  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.574475  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.600281  528268 cri.go:89] found id: ""
	I1206 10:39:15.600297  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.600311  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.600316  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.600376  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.626469  528268 cri.go:89] found id: ""
	I1206 10:39:15.626482  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.626490  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.626496  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.626561  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.652394  528268 cri.go:89] found id: ""
	I1206 10:39:15.652407  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.652414  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.652420  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.652477  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.679527  528268 cri.go:89] found id: ""
	I1206 10:39:15.679540  528268 logs.go:282] 0 containers: []
	W1206 10:39:15.679553  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.679561  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:15.679571  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:15.764342  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:15.764363  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:15.798376  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.798394  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.868665  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.868685  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.883983  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.883999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.952342  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.944348   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.945157   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.946732   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.947077   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.948583   16074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:18.453493  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.463876  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.463935  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.490209  528268 cri.go:89] found id: ""
	I1206 10:39:18.490224  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.490231  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.490236  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.490294  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.516967  528268 cri.go:89] found id: ""
	I1206 10:39:18.516981  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.516988  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.516993  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.517054  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.546169  528268 cri.go:89] found id: ""
	I1206 10:39:18.546182  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.546189  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.546194  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.546253  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.571307  528268 cri.go:89] found id: ""
	I1206 10:39:18.571320  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.571327  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.571333  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.571391  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.596842  528268 cri.go:89] found id: ""
	I1206 10:39:18.596856  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.596863  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.596868  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.596924  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.622545  528268 cri.go:89] found id: ""
	I1206 10:39:18.622559  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.622566  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.622571  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.622628  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.647866  528268 cri.go:89] found id: ""
	I1206 10:39:18.647879  528268 logs.go:282] 0 containers: []
	W1206 10:39:18.647886  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.647894  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.647904  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:18.722841  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:18.722867  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:18.738489  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.738506  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.804503  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.796653   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.797155   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.798686   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.799110   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.800626   16164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:18.804514  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:18.804527  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:18.873502  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.873520  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
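
	The rounds repeat on a roughly three-second cadence: minikube is polling `pgrep` until a kube-apiserver process appears. An illustrative wait loop with an explicit timeout, using the same probe the log runs (a sketch in bash, not minikube's actual code; the 120 s budget is an assumption):

	    deadline=$((SECONDS + 120))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if (( SECONDS >= deadline )); then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo "kube-apiserver process is up"
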
	I1206 10:39:21.404064  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.414555  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.414615  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.439357  528268 cri.go:89] found id: ""
	I1206 10:39:21.439371  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.439378  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.439384  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.439444  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.464257  528268 cri.go:89] found id: ""
	I1206 10:39:21.464270  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.464278  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.464283  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.464342  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.489051  528268 cri.go:89] found id: ""
	I1206 10:39:21.489065  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.489072  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.489077  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.489133  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.514898  528268 cri.go:89] found id: ""
	I1206 10:39:21.514912  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.514919  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.514930  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.514988  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.540268  528268 cri.go:89] found id: ""
	I1206 10:39:21.540283  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.540290  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.540296  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.540361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.564943  528268 cri.go:89] found id: ""
	I1206 10:39:21.564957  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.564965  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.564970  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.565031  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.590819  528268 cri.go:89] found id: ""
	I1206 10:39:21.590833  528268 logs.go:282] 0 containers: []
	W1206 10:39:21.590840  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.590848  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.590858  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.656247  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.647267   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.648092   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.649642   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.650214   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.652120   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:21.656258  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:21.656268  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:21.726649  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.726669  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.757883  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.757900  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.827592  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.827612  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
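
	crictl keeps returning empty IDs across every round, so the control-plane containers were never created, not merely stopped. On a kubeadm-style layout (which minikube uses; treating /etc/kubernetes/manifests as the manifest directory on this node is an assumption), the next step would be to confirm the static pod manifests exist and ask kubelet why it is not starting them:

	    # Static pod manifests should exist here if kubeadm wrote them.
	    ls -l /etc/kubernetes/manifests/
	    # Scan recent kubelet logs for static-pod and failure messages.
	    sudo journalctl -u kubelet -n 100 | grep -iE 'static pod|error|fail'
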
	I1206 10:39:24.344952  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.355567  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.355629  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.381792  528268 cri.go:89] found id: ""
	I1206 10:39:24.381806  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.381814  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.381819  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.381880  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.406752  528268 cri.go:89] found id: ""
	I1206 10:39:24.406766  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.406773  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.406779  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.406837  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.435444  528268 cri.go:89] found id: ""
	I1206 10:39:24.435458  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.435466  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.435471  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.435537  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.460261  528268 cri.go:89] found id: ""
	I1206 10:39:24.460275  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.460282  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.460287  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.460344  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.485676  528268 cri.go:89] found id: ""
	I1206 10:39:24.485689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.485697  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.485702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.485758  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.515674  528268 cri.go:89] found id: ""
	I1206 10:39:24.515689  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.515696  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.515702  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.515759  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.540533  528268 cri.go:89] found id: ""
	I1206 10:39:24.540547  528268 logs.go:282] 0 containers: []
	W1206 10:39:24.540555  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.540563  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.540573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.607514  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.607536  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:24.622495  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.622512  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.688734  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.679787   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.680616   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.681733   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.682450   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.684164   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:24.688745  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:24.688755  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:24.767851  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.767871  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
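
The block above is one complete iteration of minikube's wait-for-apiserver loop: a `sudo pgrep -xnf kube-apiserver.*minikube.*` probe, a crictl sweep over every control-plane component (all returning zero containers), and a log-gathering pass over kubelet, dmesg, `kubectl describe nodes`, CRI-O, and container status. The timestamps show the probe repeating roughly every three seconds. Below is a minimal standalone Go sketch of that poll-and-gather pattern; the helper names, the two-minute deadline, and the fixed 3s sleep are assumptions for illustration, not minikube's actual implementation (it runs these commands over an SSH runner inside the node).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning mirrors the `sudo pgrep -xnf ...` probe in the log:
// pgrep exits 0 only when a matching process exists.
func apiServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// gatherLogs stands in for the diagnostic pass the log shows after each
// failed probe; here it only pulls the kubelet journal.
func gatherLogs() {
	out, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").CombinedOutput()
	fmt.Printf("gathered kubelet logs: %d bytes\n", len(out))
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		gatherLogs()
		time.Sleep(3 * time.Second) // the log shows ~3s between probes
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
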
	I1206 10:39:27.298384  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.308520  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.308577  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.337406  528268 cri.go:89] found id: ""
	I1206 10:39:27.337421  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.337429  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.337434  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.337492  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.363616  528268 cri.go:89] found id: ""
	I1206 10:39:27.363630  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.363637  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.363643  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.363700  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.387807  528268 cri.go:89] found id: ""
	I1206 10:39:27.387821  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.387828  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.387833  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.387892  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.417047  528268 cri.go:89] found id: ""
	I1206 10:39:27.417061  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.417068  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.417076  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.417135  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.443034  528268 cri.go:89] found id: ""
	I1206 10:39:27.443047  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.443055  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.443060  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.443156  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.469276  528268 cri.go:89] found id: ""
	I1206 10:39:27.469289  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.469297  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.469302  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.469361  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.494605  528268 cri.go:89] found id: ""
	I1206 10:39:27.494619  528268 logs.go:282] 0 containers: []
	W1206 10:39:27.494626  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.494634  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.494681  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.522899  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.522916  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.593447  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.593467  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.608920  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.608937  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.673774  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:27.665376   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.666067   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.667656   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.668260   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.669814   16481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:27.673784  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:27.673795  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
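
Each iteration's crictl sweep is the same seven queries in sequence: `sudo crictl ps -a --quiet --name=<component>` for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet, with empty output reported as `0 containers`. A hedged Go sketch of that sweep, directly mirroring the commands in the log (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		// --quiet prints one container ID per line, or nothing at all.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+c).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%q: %d container(s)\n", c, len(ids))
	}
}
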
	I1206 10:39:30.246836  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.257118  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.257181  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.285905  528268 cri.go:89] found id: ""
	I1206 10:39:30.285918  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.285926  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.285931  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.285991  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.312233  528268 cri.go:89] found id: ""
	I1206 10:39:30.312247  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.312254  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.312259  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.312320  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.342032  528268 cri.go:89] found id: ""
	I1206 10:39:30.342047  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.342061  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.342066  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.342127  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.371021  528268 cri.go:89] found id: ""
	I1206 10:39:30.371051  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.371059  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.371064  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.371145  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.397540  528268 cri.go:89] found id: ""
	I1206 10:39:30.397554  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.397561  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.397566  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.397625  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.424004  528268 cri.go:89] found id: ""
	I1206 10:39:30.424018  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.424026  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.424033  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.424090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.450313  528268 cri.go:89] found id: ""
	I1206 10:39:30.450327  528268 logs.go:282] 0 containers: []
	W1206 10:39:30.450335  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.450342  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.450352  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:30.516474  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.516493  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.532143  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.532160  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.595585  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:30.587952   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.588400   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.589883   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.590195   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.591620   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:30.595595  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:30.595606  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:30.664167  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.664186  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:33.200924  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.211672  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.211735  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.237137  528268 cri.go:89] found id: ""
	I1206 10:39:33.237151  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.237159  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.237165  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.237265  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.263318  528268 cri.go:89] found id: ""
	I1206 10:39:33.263332  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.263339  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.263345  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.263403  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.292810  528268 cri.go:89] found id: ""
	I1206 10:39:33.292824  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.292832  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.292837  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.292902  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.322280  528268 cri.go:89] found id: ""
	I1206 10:39:33.322294  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.322302  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.322307  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.322371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.347371  528268 cri.go:89] found id: ""
	I1206 10:39:33.347384  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.347391  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.347397  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.347454  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.373452  528268 cri.go:89] found id: ""
	I1206 10:39:33.373465  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.373473  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.373478  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.373536  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.398875  528268 cri.go:89] found id: ""
	I1206 10:39:33.398895  528268 logs.go:282] 0 containers: []
	W1206 10:39:33.398902  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.398910  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.398921  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.465783  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.465803  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.480960  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.480977  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.548139  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:33.539389   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.540163   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.541972   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.542561   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.544286   16685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:33.548148  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:33.548158  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:33.617390  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.617412  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.152703  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.162988  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.163052  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.188586  528268 cri.go:89] found id: ""
	I1206 10:39:36.188599  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.188607  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.188611  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.188670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.213361  528268 cri.go:89] found id: ""
	I1206 10:39:36.213374  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.213383  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.213388  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.213445  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.239271  528268 cri.go:89] found id: ""
	I1206 10:39:36.239285  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.239292  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.239297  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.239357  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.265679  528268 cri.go:89] found id: ""
	I1206 10:39:36.265695  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.265702  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.265707  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.265766  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.295654  528268 cri.go:89] found id: ""
	I1206 10:39:36.295668  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.295675  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.295681  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.295739  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.323853  528268 cri.go:89] found id: ""
	I1206 10:39:36.323874  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.323881  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.323887  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.323950  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.348624  528268 cri.go:89] found id: ""
	I1206 10:39:36.348639  528268 logs.go:282] 0 containers: []
	W1206 10:39:36.348646  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.348654  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.348665  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.363245  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.363261  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.427550  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:36.419105   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.419825   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.421548   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.422073   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.423577   16788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:36.427562  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:36.427573  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:36.495925  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495943  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.524935  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.524952  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.092735  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.102812  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.102870  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.129292  528268 cri.go:89] found id: ""
	I1206 10:39:39.129306  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.129313  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.129318  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.129374  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.158470  528268 cri.go:89] found id: ""
	I1206 10:39:39.158484  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.158491  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.158496  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.158555  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.184281  528268 cri.go:89] found id: ""
	I1206 10:39:39.184295  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.184303  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.184308  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.184371  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.213800  528268 cri.go:89] found id: ""
	I1206 10:39:39.213813  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.213820  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.213825  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.213879  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.239313  528268 cri.go:89] found id: ""
	I1206 10:39:39.239327  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.239334  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.239339  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.239399  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.266416  528268 cri.go:89] found id: ""
	I1206 10:39:39.266429  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.266436  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.266442  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.266497  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.291512  528268 cri.go:89] found id: ""
	I1206 10:39:39.291526  528268 logs.go:282] 0 containers: []
	W1206 10:39:39.291533  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.291541  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.291552  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.357396  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.357414  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.372532  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.372549  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.435924  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:39.427398   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.428323   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.429997   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.430495   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.432094   16897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.435935  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:39.435946  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:39.504162  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.504182  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
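
Every `kubectl describe nodes` attempt above fails the same way: the kubeconfig points at https://localhost:8441, and with no kube-apiserver container running, the TCP connect is refused before any API discovery can happen, hence the repeated memcache.go "connection refused" errors. A hypothetical probe shows the same condition without a full kubectl run; port 8441 is taken from the log, the rest is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no apiserver listening, this prints "connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
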
	I1206 10:39:42.034738  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.045722  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.045786  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.075972  528268 cri.go:89] found id: ""
	I1206 10:39:42.075988  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.075998  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.076004  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.076071  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.111989  528268 cri.go:89] found id: ""
	I1206 10:39:42.112018  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.112042  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.112048  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.112124  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.147538  528268 cri.go:89] found id: ""
	I1206 10:39:42.147562  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.147571  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.147577  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.147654  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.177982  528268 cri.go:89] found id: ""
	I1206 10:39:42.177999  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.178009  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.178016  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.178090  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.209844  528268 cri.go:89] found id: ""
	I1206 10:39:42.209860  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.209868  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.209874  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.209966  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.266057  528268 cri.go:89] found id: ""
	I1206 10:39:42.266071  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.266079  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.266085  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.266153  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.298140  528268 cri.go:89] found id: ""
	I1206 10:39:42.298154  528268 logs.go:282] 0 containers: []
	W1206 10:39:42.298162  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.298184  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.298197  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.330034  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.330051  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.396938  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.396958  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.412056  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.412077  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.481304  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:42.470939   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.471731   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.473286   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.475758   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.476402   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:42.481314  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:42.481326  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.054765  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.080943  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.081023  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.141872  528268 cri.go:89] found id: ""
	I1206 10:39:45.141889  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.141898  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.141904  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.141970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.187818  528268 cri.go:89] found id: ""
	I1206 10:39:45.187838  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.187846  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.187854  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.187928  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.231785  528268 cri.go:89] found id: ""
	I1206 10:39:45.231815  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.231846  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.231853  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.232001  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.271976  528268 cri.go:89] found id: ""
	I1206 10:39:45.272000  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.272007  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.272020  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.272144  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.309755  528268 cri.go:89] found id: ""
	I1206 10:39:45.309770  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.309778  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.309784  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.309859  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.337077  528268 cri.go:89] found id: ""
	I1206 10:39:45.337091  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.337098  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.337104  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.337161  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.363255  528268 cri.go:89] found id: ""
	I1206 10:39:45.363269  528268 logs.go:282] 0 containers: []
	W1206 10:39:45.363277  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.363285  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.363295  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.430326  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.430345  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.445222  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.445239  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.514305  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:45.503694   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.504527   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.507399   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.508008   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.509816   17104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:45.514315  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:45.514351  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:45.586673  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.586702  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.117880  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.128191  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.128261  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.153898  528268 cri.go:89] found id: ""
	I1206 10:39:48.153912  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.153919  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.153924  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.153986  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.179947  528268 cri.go:89] found id: ""
	I1206 10:39:48.179960  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.179968  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.179973  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.180032  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.206970  528268 cri.go:89] found id: ""
	I1206 10:39:48.206984  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.206992  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.206997  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.207056  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.232490  528268 cri.go:89] found id: ""
	I1206 10:39:48.232504  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.232511  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.232516  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.232574  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.261888  528268 cri.go:89] found id: ""
	I1206 10:39:48.261902  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.261909  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.261915  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.261970  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.287239  528268 cri.go:89] found id: ""
	I1206 10:39:48.287259  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.287266  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.287271  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.287327  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.312701  528268 cri.go:89] found id: ""
	I1206 10:39:48.312716  528268 logs.go:282] 0 containers: []
	W1206 10:39:48.312723  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.312730  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.312741  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.379854  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.379873  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:48.395027  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.395043  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.467966  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.459014   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.459732   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.460649   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462199   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.462576   17210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:48.467977  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:48.467999  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:48.537326  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.537347  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:51.077353  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.088357  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.088422  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.113964  528268 cri.go:89] found id: ""
	I1206 10:39:51.113978  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.113986  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.113991  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.114048  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.141966  528268 cri.go:89] found id: ""
	I1206 10:39:51.141981  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.141989  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.141994  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.142065  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.170585  528268 cri.go:89] found id: ""
	I1206 10:39:51.170599  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.170607  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.170612  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.170670  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.196958  528268 cri.go:89] found id: ""
	I1206 10:39:51.196972  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.196980  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.196985  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.197045  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.222240  528268 cri.go:89] found id: ""
	I1206 10:39:51.222255  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.222262  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.222267  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.222328  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.248023  528268 cri.go:89] found id: ""
	I1206 10:39:51.248038  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.248045  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.248051  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.248110  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.276094  528268 cri.go:89] found id: ""
	I1206 10:39:51.276108  528268 logs.go:282] 0 containers: []
	W1206 10:39:51.276115  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.276122  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.276132  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.342420  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.342443  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.357018  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.357034  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.423986  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.415814   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.416564   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418096   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.418402   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.419900   17316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.423996  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:39:51.424007  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:39:51.493620  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.493640  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:54.023829  528268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:54.034889  528268 kubeadm.go:602] duration metric: took 4m2.326619845s to restartPrimaryControlPlane
	W1206 10:39:54.034955  528268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:39:54.035078  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:39:54.453084  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:39:54.466906  528268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:39:54.474624  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:39:54.474678  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:39:54.482552  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:39:54.482562  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:39:54.482612  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:39:54.490238  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:39:54.490301  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:39:54.497760  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:39:54.505776  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:39:54.505840  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:39:54.513397  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.521456  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:39:54.521517  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:39:54.529274  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:39:54.537105  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:39:54.537161  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
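The four grep/rm pairs above are minikube's stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8441 and is removed otherwise. A minimal shell sketch of the same check (an illustration, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      # keep the kubeconfig only if it targets the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done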
	I1206 10:39:54.544719  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:39:54.584997  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:39:54.585045  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:39:54.652750  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:39:54.652815  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:39:54.652850  528268 kubeadm.go:319] OS: Linux
	I1206 10:39:54.652893  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:39:54.652940  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:39:54.652986  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:39:54.653033  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:39:54.653079  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:39:54.653126  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:39:54.653171  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:39:54.653217  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:39:54.653262  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:39:54.728791  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:39:54.728901  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:39:54.729018  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:39:54.737647  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:39:54.741159  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:39:54.741265  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:39:54.741337  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:39:54.741433  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:39:54.741505  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:39:54.741585  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:39:54.741651  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:39:54.741743  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:39:54.741813  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:39:54.741895  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:39:54.741991  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:39:54.742045  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:39:54.742113  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:39:55.375743  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:39:55.444664  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:39:55.561708  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:39:55.802678  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:39:55.992428  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:39:55.993134  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:39:55.995941  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:39:55.999335  528268 out.go:252]   - Booting up control plane ...
	I1206 10:39:55.999434  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:39:55.999507  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:39:55.999569  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:39:56.016567  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:39:56.016688  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:39:56.025029  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:39:56.025345  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:39:56.025411  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:39:56.167783  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:39:56.167896  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:43:56.165890  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000163749s
	I1206 10:43:56.165916  528268 kubeadm.go:319] 
	I1206 10:43:56.165973  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:43:56.166007  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:43:56.166124  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:43:56.166130  528268 kubeadm.go:319] 
	I1206 10:43:56.166237  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:43:56.166298  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:43:56.166345  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:43:56.166349  528268 kubeadm.go:319] 
	I1206 10:43:56.171451  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:43:56.171899  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:43:56.172014  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:43:56.172288  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:43:56.172293  528268 kubeadm.go:319] 
	I1206 10:43:56.172374  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:43:56.172501  528268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000163749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
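kubeadm's advice above ('systemctl status kubelet', 'journalctl -xeu kubelet') has to run inside the minikube node, not on the host. One way to reach the node from the host, assuming the functional-123579 profile that appears in the CRI-O log at the end of this section (a sketch, not a command this run executed):

    minikube ssh -p functional-123579 -- sudo systemctl status kubelet
    minikube ssh -p functional-123579 -- sudo journalctl -xeu kubelet --no-pager -n 100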
	
	I1206 10:43:56.172597  528268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:43:56.619462  528268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:43:56.633229  528268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:43:56.633287  528268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:43:56.641609  528268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:43:56.641619  528268 kubeadm.go:158] found existing configuration files:
	
	I1206 10:43:56.641669  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:43:56.649494  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:43:56.649548  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:43:56.657009  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:43:56.665153  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:43:56.665204  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:43:56.672965  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.681003  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:43:56.681063  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:43:56.688721  528268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:43:56.696901  528268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:43:56.696963  528268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:43:56.704620  528268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:43:56.745749  528268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:43:56.745826  528268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:43:56.814552  528268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:43:56.814625  528268 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:43:56.814668  528268 kubeadm.go:319] OS: Linux
	I1206 10:43:56.814710  528268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:43:56.814764  528268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:43:56.814817  528268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:43:56.814861  528268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:43:56.814913  528268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:43:56.814977  528268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:43:56.815030  528268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:43:56.815078  528268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:43:56.815150  528268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:43:56.882919  528268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:43:56.883028  528268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:43:56.883177  528268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:43:56.891776  528268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:43:56.897133  528268 out.go:252]   - Generating certificates and keys ...
	I1206 10:43:56.897243  528268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:43:56.897331  528268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:43:56.897418  528268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:43:56.897483  528268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:43:56.897556  528268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:43:56.897613  528268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:43:56.897679  528268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:43:56.897743  528268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:43:56.897822  528268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:43:56.897898  528268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:43:56.897938  528268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:43:56.897997  528268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:43:57.103756  528268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:43:57.598666  528268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:43:58.161834  528268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:43:58.402161  528268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:43:58.630471  528268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:43:58.631113  528268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:43:58.634023  528268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:43:58.637198  528268 out.go:252]   - Booting up control plane ...
	I1206 10:43:58.637294  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:43:58.637640  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:43:58.639086  528268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:43:58.654264  528268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:43:58.654366  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:43:58.662722  528268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:43:58.663439  528268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:43:58.663774  528268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:43:58.799365  528268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:43:58.799473  528268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:47:58.799403  528268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000249913s
	I1206 10:47:58.799433  528268 kubeadm.go:319] 
	I1206 10:47:58.799491  528268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:47:58.799521  528268 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:47:58.799619  528268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:47:58.799623  528268 kubeadm.go:319] 
	I1206 10:47:58.799720  528268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:47:58.799749  528268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:47:58.799777  528268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:47:58.799780  528268 kubeadm.go:319] 
	I1206 10:47:58.803822  528268 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:47:58.804249  528268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:47:58.804357  528268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:47:58.804590  528268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:47:58.804595  528268 kubeadm.go:319] 
	I1206 10:47:58.804663  528268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:47:58.804715  528268 kubeadm.go:403] duration metric: took 12m7.139257328s to StartCluster
	I1206 10:47:58.804746  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:47:58.804808  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:47:58.833842  528268 cri.go:89] found id: ""
	I1206 10:47:58.833855  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.833863  528268 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:47:58.833869  528268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:47:58.833925  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:47:58.859642  528268 cri.go:89] found id: ""
	I1206 10:47:58.859656  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.859663  528268 logs.go:284] No container was found matching "etcd"
	I1206 10:47:58.859668  528268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:47:58.859731  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:47:58.888835  528268 cri.go:89] found id: ""
	I1206 10:47:58.888850  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.888857  528268 logs.go:284] No container was found matching "coredns"
	I1206 10:47:58.888863  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:47:58.888920  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:47:58.913692  528268 cri.go:89] found id: ""
	I1206 10:47:58.913706  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.913713  528268 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:47:58.913718  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:47:58.913775  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:47:58.941639  528268 cri.go:89] found id: ""
	I1206 10:47:58.941653  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.941660  528268 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:47:58.941671  528268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:47:58.941728  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:47:58.968219  528268 cri.go:89] found id: ""
	I1206 10:47:58.968240  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.968249  528268 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:47:58.968254  528268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:47:58.968312  528268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:47:58.993376  528268 cri.go:89] found id: ""
	I1206 10:47:58.993390  528268 logs.go:282] 0 containers: []
	W1206 10:47:58.993397  528268 logs.go:284] No container was found matching "kindnet"
	I1206 10:47:58.993405  528268 logs.go:123] Gathering logs for kubelet ...
	I1206 10:47:58.993415  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:47:59.059491  528268 logs.go:123] Gathering logs for dmesg ...
	I1206 10:47:59.059510  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:47:59.075692  528268 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:47:59.075708  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:47:59.140902  528268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:47:59.133228   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.133791   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135323   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.135733   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:47:59.137154   21099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:47:59.140911  528268 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:47:59.140922  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:47:59.218521  528268 logs.go:123] Gathering logs for container status ...
	I1206 10:47:59.218539  528268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:47:59.255468  528268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
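The recurring cgroups v1 warning names its knob directly: on a cgroup v1 host, kubelet v1.35 and newer expects the KubeletConfiguration field failCgroupV1 to be set to false. Since this run already applies a kubeadm patch to the kubeletconfiguration target (see the [patches] lines above), one hedged sketch is to add that field through the same mechanism; the patch directory and file name below are assumptions for illustration, not paths taken from this log:

    # assumption: kubeadm init is invoked with --patches pointing at this directory
    sudo mkdir -p /var/tmp/minikube/patches
    cat <<'EOF' | sudo tee /var/tmp/minikube/patches/kubeletconfiguration.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF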
	W1206 10:47:59.255514  528268 out.go:285] * 
	W1206 10:47:59.255766  528268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:47:59.255841  528268 out.go:285] * 
	W1206 10:47:59.258456  528268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:47:59.265427  528268 out.go:203] 
	W1206 10:47:59.268413  528268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000249913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:47:59.268473  528268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:47:59.268491  528268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:47:59.271584  528268 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:35:50 functional-123579 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.732761675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c00b0212-e336-4d22-92e1-7d2bc5879a6e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.733702159Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9f684ee3-1cff-44ee-b48c-175c742cbd8a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.734357315Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b1ddac76-5aa4-4140-b7f7-c9eed400c171 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.734837772Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7e125323-ff3c-4e31-b0b9-3d9689de3e58 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.735631552Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=67dc2959-1f35-4122-97f6-07949ee5c60d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.7361477Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=78397495-3170-4295-8073-cc8bd3750cff name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:39:54 functional-123579 crio[9949]: time="2025-12-06T10:39:54.736754759Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=b77669c2-3fed-4601-ace3-1a76e50882f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.886838849Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2aa5af4-3e0c-4a29-a9b0-9e59e8da3ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888149098Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2232845f-2ab4-48d6-ac34-944fdebda910 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.888749905Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c67da188-42dd-470b-ae77-cf546f5b22af name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889342319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7b189f38-b046-468f-93d2-aafc2f683ea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.889870274Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=cce0b971-d053-408a-aced-c9bdb56d4198 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890356696Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2133806a-9696-4cef-a9b9-9f8ae49bcb1a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:43:56 functional-123579 crio[9949]: time="2025-12-06T10:43:56.890769463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4197f4de-a4d5-47d7-aee8-909523db8ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510413066Z" level=info msg="Checking image status: kicbase/echo-server:functional-123579" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510587528Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510631539Z" level=info msg="Image kicbase/echo-server:functional-123579 not found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.510692789Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-123579 found" id=03972bc3-b343-408f-b3f2-79f8c749bdd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542613043Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-123579" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.54278168Z" level=info msg="Image docker.io/kicbase/echo-server:functional-123579 not found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.542832714Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-123579 found" id=58dbc605-d105-4be4-b25a-21c2b48f56f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.568965528Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-123579" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569093041Z" level=info msg="Image localhost/kicbase/echo-server:functional-123579 not found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:48:08 functional-123579 crio[9949]: time="2025-12-06T10:48:08.569130307Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-123579 found" id=0d06a5de-c1f5-4ecd-8470-3e3f2af12cd1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:48:09.726562   21872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:09.731056   21872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:09.731694   21872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:09.733570   21872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:48:09.734112   21872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:20] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000005fa08aa9{9P.session} n=00000000effdd306
	[  +0.001108] FS-Cache: O-key=[10] '34323935383339353739'
	[  +0.000774] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000005fa08aa9{9P.session} n=00000000d1a54e80
	[  +0.001158] FS-Cache: N-key=[10] '34323935383339353739'
	[Dec 6 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.091742] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:48:09 up  3:30,  0 user,  load average: 0.58, 0.26, 0.47
	Linux functional-123579 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:48:07 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:08 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2140.
	Dec 06 10:48:08 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:08 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:08 functional-123579 kubelet[21690]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:08 functional-123579 kubelet[21690]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:08 functional-123579 kubelet[21690]: E1206 10:48:08.258033   21690 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:08 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:08 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:08 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2141.
	Dec 06 10:48:08 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:08 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:09 functional-123579 kubelet[21775]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:09 functional-123579 kubelet[21775]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:09 functional-123579 kubelet[21775]: E1206 10:48:09.013381   21775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:09 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:09 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:48:09 functional-123579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2142.
	Dec 06 10:48:09 functional-123579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:09 functional-123579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:48:09 functional-123579 kubelet[21877]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:09 functional-123579 kubelet[21877]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:48:09 functional-123579 kubelet[21877]: E1206 10:48:09.795285   21877 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:48:09 functional-123579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:48:09 functional-123579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579 -n functional-123579: exit status 2 (442.80965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-123579" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.94s)
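The root cause for this whole group of failures is visible in the kubelet journal above: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, so the apiserver never comes up. As a hedged sketch only (commands assume the same profile name as this run; whether the suggested flag actually unblocks kubelet on a cgroup v1 host is not guaranteed, and the kubeadm warning above points at the KubeletConfiguration field FailCgroupV1 rather than a command-line flag):

    # Check which cgroup version the host is running:
    # "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1.
    stat -fc %T /sys/fs/cgroup/

    # Retry with the workaround suggested in the minikube output above.
    out/minikube-linux-arm64 start -p functional-123579 \
      --extra-config=kubelet.cgroup-driver=systemd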

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-123579 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-123579 create deployment hello-node --image kicbase/echo-server: exit status 1 (106.042201ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-123579 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.11s)
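The deployment spec itself is well-formed; the create fails only because nothing is listening on 192.168.49.2:8441. A minimal sketch of the pre-check a retry could use, reusing the status command the harness itself runs elsewhere in this report (profile and context names assumed from this run):

    # Only attempt the deployment once the apiserver reports Running.
    out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-123579
    kubectl --context functional-123579 get --raw /healthz && \
      kubectl --context functional-123579 create deployment hello-node --image kicbase/echo-server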

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 service list: exit status 103 (320.696221ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-123579 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-123579"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-123579 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-123579 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-123579\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 service list -o json: exit status 103 (325.2148ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-123579 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-123579"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-123579 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 service --namespace=default --https --url hello-node: exit status 103 (335.575801ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-123579 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-123579"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-123579 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 service hello-node --url --format={{.IP}}: exit status 103 (310.540868ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-123579 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-123579"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-123579 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-123579 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-123579\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 service hello-node --url: exit status 103 (392.617396ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-123579 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-123579"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-123579 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-123579 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-123579"
functional_test.go:1579: failed to parse "* The control-plane node functional-123579 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-123579\"": parse "* The control-plane node functional-123579 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-123579\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.39s)
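All of the ServiceCmd subtests fail the same way: minikube writes its "apiserver is not running" advisory to stdout and exits 103, and the harness then feeds that advisory text into net/url, producing the parse error above. A hedged sketch of the defensive pattern (the exact meaning of exit code 103 is minikube-internal; treating any non-zero exit as "no URL" is the safe reading):

    # Never parse the output as a URL before checking the exit status.
    url=$(out/minikube-linux-arm64 -p functional-123579 service hello-node --url)
    rc=$?
    [ "$rc" -eq 0 ] || echo "no service URL (exit $rc): $url"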

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1206 10:48:15.117462  543276 out.go:360] Setting OutFile to fd 1 ...
I1206 10:48:15.117723  543276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:48:15.117837  543276 out.go:374] Setting ErrFile to fd 2...
I1206 10:48:15.117878  543276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:48:15.123299  543276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:48:15.123781  543276 mustload.go:66] Loading cluster: functional-123579
I1206 10:48:15.124438  543276 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:48:15.125116  543276 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:48:15.154924  543276 host.go:66] Checking if "functional-123579" exists ...
I1206 10:48:15.155325  543276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:48:15.309521  543276 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:48:15.285665257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:48:15.309725  543276 api_server.go:166] Checking apiserver status ...
I1206 10:48:15.309796  543276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 10:48:15.309847  543276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:48:15.349721  543276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
W1206 10:48:15.469893  543276 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1206 10:48:15.472946  543276 out.go:179] * The control-plane node functional-123579 apiserver is not running: (state=Stopped)
I1206 10:48:15.475730  543276 out.go:179]   To start a cluster, run: "minikube start -p functional-123579"

                                                
                                                
stdout: * The control-plane node functional-123579 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-123579"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 543277: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)
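The tunnel exits before doing any work because its apiserver probe (the pgrep at 10:48:15.309796 above) finds no kube-apiserver process inside the node. A sketch of reproducing that probe by hand (assumes the node container is still up; pgrep exits non-zero when nothing matches):

    # Same check the tunnel runs over SSH; exit status 1 here matches the log above.
    out/minikube-linux-arm64 -p functional-123579 ssh -- \
      sudo pgrep -xnf 'kube-apiserver.*minikube.*'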

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-123579 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-123579 apply -f testdata/testsvc.yaml: exit status 1 (181.87067ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-123579 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.18s)
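The --validate=false hint in the stderr only skips the failed OpenAPI download; the apply itself still needs a reachable apiserver, so the flag cannot rescue this run. A sketch of the ordering that would make the hint useful (context name taken from this run):

    # Confirm the control plane answers before applying; --validate=false then
    # only relaxes client-side schema validation, not connectivity.
    kubectl --context functional-123579 cluster-info && \
      kubectl --context functional-123579 apply --validate=false -f testdata/testsvc.yaml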

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (106.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.104.109.146": Temporary Error: Get "http://10.104.109.146": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-123579 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-123579 get svc nginx-svc: exit status 1 (61.006122ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-123579 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (106.17s)
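The 106-second wall time is the test's HTTP client timing out against the ClusterIP. The same probe can be run by hand; a sketch with the service IP taken from the failure above (only meaningful while "minikube tunnel" is active):

    # Mirror the test's Client.Timeout with an explicit curl deadline.
    curl --max-time 10 -sS http://10.104.109.146 | grep -q "Welcome to nginx!" \
      && echo "nginx reachable" || echo "nginx not reachable"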

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765018209109782394" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765018209109782394" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765018209109782394" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001/test-1765018209109782394
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.207853ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 10:50:09.442320  488068 retry.go:31] will retry after 444.397896ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 10:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 10:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 10:50 test-1765018209109782394
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh cat /mount-9p/test-1765018209109782394
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-123579 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-123579 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (57.694162ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-123579 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (285.926064ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37743)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  6 10:50 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  6 10:50 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  6 10:50 test-1765018209109782394
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-123579 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37743
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001:/mount-9p --alsologtostderr -v=1] stderr:
I1206 10:50:09.183821  545677 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:09.184027  545677 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:09.184041  545677 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:09.184047  545677 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:09.184363  545677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:09.184686  545677 mustload.go:66] Loading cluster: functional-123579
I1206 10:50:09.185125  545677 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:09.185774  545677 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:09.205110  545677 host.go:66] Checking if "functional-123579" exists ...
I1206 10:50:09.205445  545677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:50:09.296215  545677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:50:09.284213438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:50:09.296374  545677 cli_runner.go:164] Run: docker network inspect functional-123579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 10:50:09.323141  545677 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001 into VM as /mount-9p ...
I1206 10:50:09.326121  545677 out.go:179]   - Mount type:   9p
I1206 10:50:09.329154  545677 out.go:179]   - User ID:      docker
I1206 10:50:09.332082  545677 out.go:179]   - Group ID:     docker
I1206 10:50:09.334955  545677 out.go:179]   - Version:      9p2000.L
I1206 10:50:09.337892  545677 out.go:179]   - Message Size: 262144
I1206 10:50:09.340795  545677 out.go:179]   - Options:      map[]
I1206 10:50:09.343775  545677 out.go:179]   - Bind Address: 192.168.49.1:37743
I1206 10:50:09.346569  545677 out.go:179] * Userspace file server: 
I1206 10:50:09.346813  545677 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 10:50:09.346902  545677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:09.365298  545677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
I1206 10:50:09.474077  545677 mount.go:180] unmount for /mount-9p ran successfully
I1206 10:50:09.474106  545677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1206 10:50:09.482891  545677 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37743,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1206 10:50:09.494130  545677 main.go:127] stdlog: ufs.go:141 connected
I1206 10:50:09.494350  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tversion tag 65535 msize 262144 version '9P2000.L'
I1206 10:50:09.494400  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rversion tag 65535 msize 262144 version '9P2000'
I1206 10:50:09.494690  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1206 10:50:09.494762  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rattach tag 0 aqid (c9d241 f3488b50 'd')
I1206 10:50:09.495676  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 0
I1206 10:50:09.495810  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d241 f3488b50 'd') m d775 at 0 mt 1765018209 l 4096 t 0 d 0 ext )
I1206 10:50:09.497935  545677 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/.mount-process: {Name:mk55ca06911ade27f2723f8b2a12a53ae692896f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:50:09.498161  545677 mount.go:105] mount successful: ""
I1206 10:50:09.501714  545677 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3981227179/001 to /mount-9p
I1206 10:50:09.504482  545677 out.go:203] 
I1206 10:50:09.507427  545677 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1206 10:50:10.456143  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 0
I1206 10:50:10.456221  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d241 f3488b50 'd') m d775 at 0 mt 1765018209 l 4096 t 0 d 0 ext )
I1206 10:50:10.456851  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 1 
I1206 10:50:10.457054  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 
I1206 10:50:10.461506  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Topen tag 0 fid 1 mode 0
I1206 10:50:10.461615  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Ropen tag 0 qid (c9d241 f3488b50 'd') iounit 0
I1206 10:50:10.467236  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 0
I1206 10:50:10.467363  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d241 f3488b50 'd') m d775 at 0 mt 1765018209 l 4096 t 0 d 0 ext )
I1206 10:50:10.467514  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 0 count 262120
I1206 10:50:10.467659  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 258
I1206 10:50:10.467761  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 258 count 261862
I1206 10:50:10.467791  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:10.467881  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:50:10.467907  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:10.468011  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 10:50:10.468058  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d242 f3488b50 '') 
I1206 10:50:10.468167  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:10.468206  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d242 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.468307  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:10.468341  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d242 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.468433  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:10.468464  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:10.468565  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 2 0:'test-1765018209109782394' 
I1206 10:50:10.468601  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d244 f3488b50 '') 
I1206 10:50:10.468685  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:10.468717  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('test-1765018209109782394' 'jenkins' 'jenkins' '' q (c9d244 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.468816  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:10.468848  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('test-1765018209109782394' 'jenkins' 'jenkins' '' q (c9d244 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.468938  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:10.468960  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:10.469060  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 10:50:10.469108  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d243 f3488b50 '') 
I1206 10:50:10.469206  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:10.469237  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d243 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.469331  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:10.469366  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d243 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.469456  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:10.469478  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:10.469569  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:50:10.469599  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:10.469708  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 1
I1206 10:50:10.469739  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:10.747294  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 1 0:'test-1765018209109782394' 
I1206 10:50:10.747406  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d244 f3488b50 '') 
I1206 10:50:10.747578  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 1
I1206 10:50:10.747630  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('test-1765018209109782394' 'jenkins' 'jenkins' '' q (c9d244 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.747788  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 1 newfid 2 
I1206 10:50:10.747820  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 
I1206 10:50:10.747934  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Topen tag 0 fid 2 mode 0
I1206 10:50:10.747982  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Ropen tag 0 qid (c9d244 f3488b50 '') iounit 0
I1206 10:50:10.748125  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 1
I1206 10:50:10.748166  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('test-1765018209109782394' 'jenkins' 'jenkins' '' q (c9d244 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:10.748313  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 2 offset 0 count 262120
I1206 10:50:10.748355  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 24
I1206 10:50:10.748496  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 2 offset 24 count 262120
I1206 10:50:10.748525  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:10.748649  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 2 offset 24 count 262120
I1206 10:50:10.748681  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:10.748841  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:10.748880  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:10.749092  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 1
I1206 10:50:10.749124  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:11.095345  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 0
I1206 10:50:11.095424  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d241 f3488b50 'd') m d775 at 0 mt 1765018209 l 4096 t 0 d 0 ext )
I1206 10:50:11.095799  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 1 
I1206 10:50:11.095842  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 
I1206 10:50:11.095978  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Topen tag 0 fid 1 mode 0
I1206 10:50:11.096049  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Ropen tag 0 qid (c9d241 f3488b50 'd') iounit 0
I1206 10:50:11.096203  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 0
I1206 10:50:11.096242  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d241 f3488b50 'd') m d775 at 0 mt 1765018209 l 4096 t 0 d 0 ext )
I1206 10:50:11.096389  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 0 count 262120
I1206 10:50:11.096495  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 258
I1206 10:50:11.096636  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 258 count 261862
I1206 10:50:11.096663  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:11.096783  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:50:11.096808  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:11.096961  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 10:50:11.096997  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d242 f3488b50 '') 
I1206 10:50:11.097117  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:11.097151  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d242 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:11.097289  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:11.097325  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d242 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:11.097450  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:11.097470  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:11.097664  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 2 0:'test-1765018209109782394' 
I1206 10:50:11.097721  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d244 f3488b50 '') 
I1206 10:50:11.097893  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:11.097931  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('test-1765018209109782394' 'jenkins' 'jenkins' '' q (c9d244 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:11.098083  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:11.098119  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('test-1765018209109782394' 'jenkins' 'jenkins' '' q (c9d244 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:11.098242  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:11.098269  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:11.098500  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 10:50:11.098539  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rwalk tag 0 (c9d243 f3488b50 '') 
I1206 10:50:11.098658  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:11.098691  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d243 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:11.098825  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tstat tag 0 fid 2
I1206 10:50:11.098860  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d243 f3488b50 '') m 644 at 0 mt 1765018209 l 24 t 0 d 0 ext )
I1206 10:50:11.098985  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 2
I1206 10:50:11.099011  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:11.099229  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:50:11.099295  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rread tag 0 count 0
I1206 10:50:11.099477  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 1
I1206 10:50:11.099517  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:11.101086  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1206 10:50:11.101158  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rerror tag 0 ename 'file not found' ecode 0
I1206 10:50:11.409760  545677 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44528 Tclunk tag 0 fid 0
I1206 10:50:11.409808  545677 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44528 Rclunk tag 0
I1206 10:50:11.410836  545677 main.go:127] stdlog: ufs.go:147 disconnected
I1206 10:50:11.433259  545677 out.go:179] * Unmounting /mount-9p ...
I1206 10:50:11.436190  545677 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 10:50:11.443109  545677 mount.go:180] unmount for /mount-9p ran successfully
I1206 10:50:11.443240  545677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/.mount-process: {Name:mk55ca06911ade27f2723f8b2a12a53ae692896f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:50:11.446472  545677 out.go:203] 
W1206 10:50:11.449398  545677 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1206 10:50:11.452253  545677 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.42s)
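
For context on the 9P trace above: each Tread asks for count 262120, which matches the mount's msize of 262144 (see MountMSize in the profile configs quoted later in this report) minus the 24-byte 9P I/O header, and the test itself dies with MK_INTERRUPTED because the harness sent a termination signal, not because of the 9P traffic. A minimal sketch of the mount/unmount pair this test exercises; the profile name and cleanup command are taken from the log above, while the host source path is a hypothetical placeholder:

    # hypothetical host directory to export; profile name and msize from the log above
    out/minikube-linux-arm64 mount -p functional-123579 /tmp/mount-src:/mount-9p --msize 262144 &
    # cleanup guard minikube itself runs on the node (copied from the unmount log line above)
    /bin/bash -c '[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo '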

TestJSONOutput/pause/Command (1.82s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-767261 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-767261 --output=json --user=testUser: exit status 80 (1.815140496s)

-- stdout --
	{"specversion":"1.0","id":"cdf82641-028c-40a7-aea1-2e379aad32d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-767261 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e2fcbaaf-6368-4831-a2ac-d987f568a894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T11:04:35Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"3b645431-285a-43d5-9e54-0508304dd919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-767261 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.82s)
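
Each line of the --output=json stream above is a CloudEvents envelope, and the actual failure detail is buried in data.message of the io.k8s.sigs.minikube.error events. A small sketch for extracting just those messages, assuming jq is available on the host:

    out/minikube-linux-arm64 pause -p json-output-767261 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'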

TestJSONOutput/unpause/Command (2.09s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-767261 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-767261 --output=json --user=testUser: exit status 80 (2.088814954s)

-- stdout --
	{"specversion":"1.0","id":"bcae8b33-94fb-4a5f-8fb2-7e8ff4b7bec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-767261 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b80b3f4c-6311-4a59-8cd0-25f70e4a86ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T11:04:37Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"088feadc-dc22-4a49-bc5a-585dcf8f0a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-767261 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.09s)
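
Both the pause and unpause failures reduce to the same root error: "sudo runc list -f json" on the node cannot open /run/runc, which is runc's default state root, so minikube cannot enumerate containers to pause or unpause. A quick diagnostic sketch (profile name taken from the log; treating /run/runc as the state root is an assumption based on runc's documented default):

    # inspect the node directly: does runc's state directory exist, and what does list report?
    out/minikube-linux-arm64 ssh -p json-output-767261 -- 'ls -ld /run/runc; sudo runc list -f json'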

TestKubernetesUpgrade (779.88s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.156788886s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-888189
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-888189: (1.362906589s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-888189 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-888189 status --format={{.Host}}: exit status 7 (80.721311ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
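
The upgrade path this test drives can be replayed by hand with the same four steps (commands taken from the Run: lines above and below; the format argument is quoted here for interactive shells):

    out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-888189
    out/minikube-linux-arm64 -p kubernetes-upgrade-888189 status --format='{{.Host}}'   # prints "Stopped"; exit status 7 is expected here
    out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio   # this step exits 109 below
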
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:24:59.010125  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:25:13.255217  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:26:18.673634  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m21.60789481s)

-- stdout --
	* [kubernetes-upgrade-888189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-888189" primary control-plane node in "kubernetes-upgrade-888189" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1206 11:24:42.054665  675284 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:24:42.054818  675284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:24:42.054831  675284 out.go:374] Setting ErrFile to fd 2...
	I1206 11:24:42.054852  675284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:24:42.055302  675284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:24:42.055826  675284 out.go:368] Setting JSON to false
	I1206 11:24:42.056853  675284 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14833,"bootTime":1765005449,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 11:24:42.056959  675284 start.go:143] virtualization:  
	I1206 11:24:42.060149  675284 out.go:179] * [kubernetes-upgrade-888189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:24:42.062922  675284 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 11:24:42.062883  675284 notify.go:221] Checking for updates...
	I1206 11:24:42.070004  675284 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:24:42.073229  675284 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:24:42.076427  675284 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 11:24:42.079601  675284 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:24:42.094202  675284 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:24:42.100082  675284 config.go:182] Loaded profile config "kubernetes-upgrade-888189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 11:24:42.101136  675284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:24:42.151766  675284 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:24:42.151962  675284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:24:42.224068  675284 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:24:42.211589062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:24:42.224205  675284 docker.go:319] overlay module found
	I1206 11:24:42.227392  675284 out.go:179] * Using the docker driver based on existing profile
	I1206 11:24:42.230363  675284 start.go:309] selected driver: docker
	I1206 11:24:42.230398  675284 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-888189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-888189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:24:42.230514  675284 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:24:42.231465  675284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:24:42.303388  675284 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:24:42.292870739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:24:42.303778  675284 cni.go:84] Creating CNI manager for ""
	I1206 11:24:42.303854  675284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:24:42.303904  675284 start.go:353] cluster config:
	{Name:kubernetes-upgrade-888189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-888189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:24:42.309807  675284 out.go:179] * Starting "kubernetes-upgrade-888189" primary control-plane node in "kubernetes-upgrade-888189" cluster
	I1206 11:24:42.312604  675284 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:24:42.315677  675284 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:24:42.318596  675284 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 11:24:42.318647  675284 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 11:24:42.318671  675284 cache.go:65] Caching tarball of preloaded images
	I1206 11:24:42.318762  675284 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:24:42.318782  675284 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 11:24:42.318899  675284 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/config.json ...
	I1206 11:24:42.319205  675284 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:24:42.344366  675284 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:24:42.344394  675284 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:24:42.344416  675284 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:24:42.344452  675284 start.go:360] acquireMachinesLock for kubernetes-upgrade-888189: {Name:mkd2709caec5041d13fbe1ccd43b8bddb65e73b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:24:42.344528  675284 start.go:364] duration metric: took 48.458µs to acquireMachinesLock for "kubernetes-upgrade-888189"
	I1206 11:24:42.344554  675284 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:24:42.344565  675284 fix.go:54] fixHost starting: 
	I1206 11:24:42.344839  675284 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-888189 --format={{.State.Status}}
	I1206 11:24:42.364118  675284 fix.go:112] recreateIfNeeded on kubernetes-upgrade-888189: state=Stopped err=<nil>
	W1206 11:24:42.364189  675284 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:24:42.367672  675284 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-888189" ...
	I1206 11:24:42.367782  675284 cli_runner.go:164] Run: docker start kubernetes-upgrade-888189
	I1206 11:24:42.653760  675284 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-888189 --format={{.State.Status}}
	I1206 11:24:42.680599  675284 kic.go:430] container "kubernetes-upgrade-888189" state is running.
	I1206 11:24:42.681008  675284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-888189
	I1206 11:24:42.712541  675284 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/config.json ...
	I1206 11:24:42.712780  675284 machine.go:94] provisionDockerMachine start ...
	I1206 11:24:42.712838  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:42.735754  675284 main.go:143] libmachine: Using SSH client type: native
	I1206 11:24:42.736082  675284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1206 11:24:42.736097  675284 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:24:42.736802  675284 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:24:45.895255  675284 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-888189
	
	I1206 11:24:45.895284  675284 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-888189"
	I1206 11:24:45.895358  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:45.913746  675284 main.go:143] libmachine: Using SSH client type: native
	I1206 11:24:45.914185  675284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1206 11:24:45.914248  675284 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-888189 && echo "kubernetes-upgrade-888189" | sudo tee /etc/hostname
	I1206 11:24:46.081012  675284 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-888189
	
	I1206 11:24:46.081115  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:46.098612  675284 main.go:143] libmachine: Using SSH client type: native
	I1206 11:24:46.098932  675284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1206 11:24:46.098960  675284 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-888189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-888189/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-888189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:24:46.252411  675284 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:24:46.252437  675284 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 11:24:46.252461  675284 ubuntu.go:190] setting up certificates
	I1206 11:24:46.252470  675284 provision.go:84] configureAuth start
	I1206 11:24:46.252531  675284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-888189
	I1206 11:24:46.272258  675284 provision.go:143] copyHostCerts
	I1206 11:24:46.272341  675284 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 11:24:46.272362  675284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:24:46.272443  675284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 11:24:46.272598  675284 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 11:24:46.272610  675284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:24:46.272640  675284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 11:24:46.272712  675284 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 11:24:46.272723  675284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:24:46.272748  675284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 11:24:46.272818  675284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-888189 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-888189 localhost minikube]
	I1206 11:24:46.443608  675284 provision.go:177] copyRemoteCerts
	I1206 11:24:46.443678  675284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:24:46.443723  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:46.463410  675284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/kubernetes-upgrade-888189/id_rsa Username:docker}
	I1206 11:24:46.571057  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:24:46.588936  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 11:24:46.609843  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:24:46.629567  675284 provision.go:87] duration metric: took 377.07226ms to configureAuth
	I1206 11:24:46.629595  675284 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:24:46.629826  675284 config.go:182] Loaded profile config "kubernetes-upgrade-888189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:24:46.629947  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:46.647153  675284 main.go:143] libmachine: Using SSH client type: native
	I1206 11:24:46.647492  675284 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1206 11:24:46.647511  675284 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:24:46.993435  675284 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:24:46.993456  675284 machine.go:97] duration metric: took 4.280667036s to provisionDockerMachine
	I1206 11:24:46.993467  675284 start.go:293] postStartSetup for "kubernetes-upgrade-888189" (driver="docker")
	I1206 11:24:46.993479  675284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:24:46.993548  675284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:24:46.993598  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:47.015793  675284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/kubernetes-upgrade-888189/id_rsa Username:docker}
	I1206 11:24:47.124679  675284 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:24:47.128846  675284 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:24:47.128877  675284 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:24:47.128889  675284 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 11:24:47.128946  675284 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 11:24:47.129028  675284 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 11:24:47.129201  675284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:24:47.138228  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:24:47.157865  675284 start.go:296] duration metric: took 164.381702ms for postStartSetup
	I1206 11:24:47.157946  675284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:24:47.157993  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:47.175816  675284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/kubernetes-upgrade-888189/id_rsa Username:docker}
	I1206 11:24:47.280484  675284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:24:47.285336  675284 fix.go:56] duration metric: took 4.940763205s for fixHost
	I1206 11:24:47.285361  675284 start.go:83] releasing machines lock for "kubernetes-upgrade-888189", held for 4.940819286s
	I1206 11:24:47.285436  675284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-888189
	I1206 11:24:47.302506  675284 ssh_runner.go:195] Run: cat /version.json
	I1206 11:24:47.302563  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:47.302621  675284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:24:47.302677  675284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-888189
	I1206 11:24:47.324157  675284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/kubernetes-upgrade-888189/id_rsa Username:docker}
	I1206 11:24:47.343248  675284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/kubernetes-upgrade-888189/id_rsa Username:docker}
	I1206 11:24:47.436213  675284 ssh_runner.go:195] Run: systemctl --version
	I1206 11:24:47.525507  675284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:24:47.562232  675284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:24:47.567329  675284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:24:47.567416  675284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:24:47.575377  675284 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:24:47.575403  675284 start.go:496] detecting cgroup driver to use...
	I1206 11:24:47.575448  675284 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:24:47.575499  675284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:24:47.590718  675284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:24:47.605057  675284 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:24:47.605150  675284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:24:47.621529  675284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:24:47.636158  675284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:24:47.759035  675284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:24:47.875439  675284 docker.go:234] disabling docker service ...
	I1206 11:24:47.875504  675284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:24:47.892155  675284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:24:47.907796  675284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:24:48.035907  675284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:24:48.152861  675284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:24:48.166673  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:24:48.182913  675284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:24:48.182993  675284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.193134  675284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:24:48.193234  675284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.203526  675284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.213644  675284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.223197  675284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:24:48.231932  675284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.241337  675284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.254179  675284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:24:48.263408  675284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:24:48.271599  675284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:24:48.279349  675284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:24:48.400921  675284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 11:24:48.578604  675284 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:24:48.578676  675284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:24:48.582774  675284 start.go:564] Will wait 60s for crictl version
	I1206 11:24:48.582858  675284 ssh_runner.go:195] Run: which crictl
	I1206 11:24:48.586517  675284 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:24:48.611466  675284 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:24:48.611634  675284 ssh_runner.go:195] Run: crio --version
	I1206 11:24:48.644082  675284 ssh_runner.go:195] Run: crio --version
	I1206 11:24:48.681148  675284 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 11:24:48.684003  675284 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-888189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:24:48.707599  675284 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:24:48.711841  675284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:24:48.722646  675284 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-888189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-888189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:24:48.722793  675284 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 11:24:48.722859  675284 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:24:48.768730  675284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 11:24:48.768814  675284 ssh_runner.go:195] Run: which lz4
	I1206 11:24:48.772858  675284 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 11:24:48.776477  675284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 11:24:48.776512  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1206 11:24:50.307165  675284 crio.go:462] duration metric: took 1.534353869s to copy over tarball
	I1206 11:24:50.307314  675284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 11:24:52.273142  675284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.965788245s)
	I1206 11:24:52.273178  675284 crio.go:469] duration metric: took 1.965974842s to extract the tarball
	I1206 11:24:52.273187  675284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 11:24:52.332166  675284 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:24:52.364129  675284 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:24:52.364151  675284 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:24:52.364157  675284 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 11:24:52.364261  675284 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-888189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-888189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:24:52.364337  675284 ssh_runner.go:195] Run: crio config
	I1206 11:24:52.447557  675284 cni.go:84] Creating CNI manager for ""
	I1206 11:24:52.447584  675284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:24:52.447614  675284 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:24:52.447638  675284 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-888189 NodeName:kubernetes-upgrade-888189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:24:52.447812  675284 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-888189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
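The file just rendered is a single YAML stream carrying four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal Go sketch of splitting such a stream and reporting each document's kind; the helper names and the use of sigs.k8s.io/yaml are illustrative assumptions, not minikube's actual code:

package main

import (
	"fmt"
	"strings"

	"sigs.k8s.io/yaml" // assumption: any YAML decoder with an Unmarshal entry point works here
)

// typeMeta is the minimal header every kubeadm/kubelet/kube-proxy document carries.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// stream is a trimmed two-document stand-in for the full config shown above.
const stream = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
`

func main() {
	for _, doc := range strings.Split(stream, "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue // tolerate leading/trailing separators
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", tm.Kind, tm.APIVersion)
	}
}

Splitting on the separator first matters because kubeadm versions each document independently, which is exactly what the config-drift diff later in this log turns on.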
	I1206 11:24:52.447890  675284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:24:52.455503  675284 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:24:52.455573  675284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:24:52.463112  675284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1206 11:24:52.476542  675284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:24:52.489333  675284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1206 11:24:52.503117  675284 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:24:52.506754  675284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:24:52.516755  675284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:24:52.638958  675284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:24:52.654746  675284 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189 for IP: 192.168.76.2
	I1206 11:24:52.654765  675284 certs.go:195] generating shared ca certs ...
	I1206 11:24:52.654781  675284 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:24:52.654928  675284 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 11:24:52.654984  675284 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 11:24:52.654996  675284 certs.go:257] generating profile certs ...
	I1206 11:24:52.655080  675284 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/client.key
	I1206 11:24:52.655265  675284 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/apiserver.key.a75dadd6
	I1206 11:24:52.655323  675284 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/proxy-client.key
	I1206 11:24:52.655446  675284 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 11:24:52.655479  675284 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 11:24:52.655488  675284 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 11:24:52.655514  675284 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:24:52.655537  675284 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:24:52.655564  675284 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 11:24:52.655620  675284 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:24:52.656195  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:24:52.678396  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:24:52.714402  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:24:52.763527  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 11:24:52.783959  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 11:24:52.805590  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:24:52.825639  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:24:52.844489  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:24:52.861918  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 11:24:52.880103  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 11:24:52.899548  675284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:24:52.917372  675284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:24:52.930585  675284 ssh_runner.go:195] Run: openssl version
	I1206 11:24:52.937560  675284 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 11:24:52.946437  675284 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 11:24:52.954185  675284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 11:24:52.958293  675284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 11:24:52.958374  675284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 11:24:53.003455  675284 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:24:53.012244  675284 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 11:24:53.019792  675284 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 11:24:53.028353  675284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 11:24:53.032274  675284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 11:24:53.032381  675284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 11:24:53.074578  675284 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:24:53.083981  675284 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:24:53.091920  675284 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:24:53.100606  675284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:24:53.104917  675284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:24:53.105064  675284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:24:53.146683  675284 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:24:53.154369  675284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:24:53.158302  675284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:24:53.199995  675284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:24:53.241766  675284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:24:53.283275  675284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:24:53.324473  675284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:24:53.379561  675284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
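Each of the six -checkend 86400 runs above asks openssl whether the certificate expires within the next 24 hours (exit status 1 if it does, 0 otherwise). The same check expressed in Go, as a sketch; the path in main is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
// It mirrors `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	// Illustrative path; the log above checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}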
	I1206 11:24:53.434771  675284 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-888189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-888189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:24:53.434919  675284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:24:53.435037  675284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:24:53.472722  675284 cri.go:89] found id: ""
	I1206 11:24:53.472809  675284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:24:53.481111  675284 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:24:53.481175  675284 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:24:53.481258  675284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:24:53.493335  675284 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:24:53.493993  675284 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-888189" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:24:53.494251  675284 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-888189" cluster setting kubeconfig missing "kubernetes-upgrade-888189" context setting]
	I1206 11:24:53.494759  675284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:24:53.495487  675284 kapi.go:59] client config for kubernetes-upgrade-888189: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kubernetes-upgrade-888189/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:24:53.496006  675284 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:24:53.496023  675284 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:24:53.496029  675284 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:24:53.496034  675284 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:24:53.496038  675284 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
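Repairing the kubeconfig means adding the missing cluster, user, and context entries under the file lock acquired above, then writing the file back in place. A hedged sketch using client-go's clientcmd API; the function name and certificate paths are assumptions, and minikube's real implementation differs:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds cluster, user, and context entries for a profile
// when they are absent, then rewrites the kubeconfig file.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		c.CertificateAuthority = "/home/jenkins/.minikube/ca.crt" // illustrative path
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.AuthInfos[name]; !ok {
		u := api.NewAuthInfo()
		u.ClientCertificate = "/home/jenkins/.minikube/profiles/" + name + "/client.crt" // illustrative
		u.ClientKey = "/home/jenkins/.minikube/profiles/" + name + "/client.key"         // illustrative
		cfg.AuthInfos[name] = u
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := repairKubeconfig(
		"/home/jenkins/minikube-integration/22049-484819/kubeconfig",
		"kubernetes-upgrade-888189",
		"https://192.168.76.2:8443",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}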
	I1206 11:24:53.496303  675284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:24:53.505097  675284 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 11:24:23.244807163 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 11:24:52.496230022 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-888189"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
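The drift check itself is just diff -u against the previously applied file: exit status 0 means no drift, 1 means the files differ and the cluster must be reconfigured from the new one. A minimal sketch of that decision; the wrapper function is an assumption, only the diff invocation comes from the log:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status:
// 0 = identical, 1 = files differ (drift), anything else = a real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("will reconfigure cluster:\n" + diff)
	}
}

Here the diff found both a schema bump (v1beta3 to v1beta4, which moves extraArgs from string maps to name/value lists) and a version bump (v1.28.0 to v1.35.0-beta.0), so the restart path reconfigures rather than reuses the running control plane.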
	I1206 11:24:53.505117  675284 kubeadm.go:1161] stopping kube-system containers ...
	I1206 11:24:53.505129  675284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 11:24:53.505228  675284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:24:53.532353  675284 cri.go:89] found id: ""
	I1206 11:24:53.532436  675284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 11:24:53.546565  675284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:24:53.555051  675284 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  6 11:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  6 11:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  6 11:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  6 11:24 /etc/kubernetes/scheduler.conf
	
	I1206 11:24:53.555225  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:24:53.563407  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:24:53.571598  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:24:53.579877  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:24:53.579987  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:24:53.587595  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:24:53.595225  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:24:53.595319  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:24:53.602706  675284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:24:53.610723  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:24:53.661058  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:24:55.921729  675284 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.26062911s)
	I1206 11:24:55.921830  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:24:56.142827  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:24:56.207910  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
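Because this is a restart rather than a fresh install, minikube replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full kubeadm init. A sketch of that sequencing with the PATH override from the log; the driver loop is an assumption, only the commands themselves are taken from the output above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase list copied in order from the log lines above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := exec.Command("sudo", "/bin/bash", "-c",
			fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}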
	I1206 11:24:56.250946  675284 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:24:56.251027  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:56.752005  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:57.251827  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:57.751320 through 11:25:54.751814  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* (115 identical polls at 500 ms intervals elided; no kube-apiserver process appeared)
	I1206 11:25:55.251360  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:25:55.751202  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
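The wait above is a 500 ms pgrep poll for the apiserver process; the timestamps show it giving up after roughly 60 seconds, at which point the run falls back to gathering diagnostics. A sketch of the pattern, assuming the ~60 s deadline inferred from the timestamps rather than taken from minikube's source:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` every 500 ms
// until a matching process appears or the context deadline passes.
func waitForAPIServer(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		// pgrep exits 0 as soon as a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err) // at this point the log below switches to gathering diagnostics
	}
}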
	I1206 11:25:56.252040  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:25:56.252147  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:25:56.285064  675284 cri.go:89] found id: ""
	I1206 11:25:56.285088  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.285098  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:25:56.285105  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:25:56.285164  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:25:56.315920  675284 cri.go:89] found id: ""
	I1206 11:25:56.315945  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.315954  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:25:56.315964  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:25:56.316023  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:25:56.346169  675284 cri.go:89] found id: ""
	I1206 11:25:56.346195  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.346204  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:25:56.346211  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:25:56.346269  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:25:56.371615  675284 cri.go:89] found id: ""
	I1206 11:25:56.371642  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.371651  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:25:56.371658  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:25:56.371713  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:25:56.404148  675284 cri.go:89] found id: ""
	I1206 11:25:56.404171  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.404180  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:25:56.404186  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:25:56.404246  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:25:56.431950  675284 cri.go:89] found id: ""
	I1206 11:25:56.431984  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.431993  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:25:56.432000  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:25:56.432126  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:25:56.460509  675284 cri.go:89] found id: ""
	I1206 11:25:56.460537  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.460547  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:25:56.460554  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:25:56.460614  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:25:56.490023  675284 cri.go:89] found id: ""
	I1206 11:25:56.490049  675284 logs.go:282] 0 containers: []
	W1206 11:25:56.490058  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:25:56.490068  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:25:56.490079  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:25:56.563016  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:25:56.563036  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:25:56.563051  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:25:56.593194  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:25:56.593228  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:25:56.626913  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:25:56.626949  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:25:56.701950  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:25:56.701986  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
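Each failed wait cycle gathers the same five diagnostic sources. Note the "which crictl || echo crictl" fallback, which keeps the command runnable even when crictl is not on root's PATH, and the final "|| sudo docker ps -a" escape hatch. A sketch of that gather step; the loop is an assumption, the five commands are copied from the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command via bash -c and prints whatever it produced;
// failures are reported inline so the remaining sources are still collected.
func gather(name, cmdline string) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	fmt.Printf("== %s ==\n%s", name, out)
	if err != nil {
		fmt.Printf("(gather failed: %v)\n", err)
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}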
	I1206 11:25:59.223259  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:25:59.233902  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:25:59.233979  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:25:59.261505  675284 cri.go:89] found id: ""
	I1206 11:25:59.261534  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.261549  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:25:59.261556  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:25:59.261615  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:25:59.286892  675284 cri.go:89] found id: ""
	I1206 11:25:59.286920  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.286929  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:25:59.286936  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:25:59.286999  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:25:59.317168  675284 cri.go:89] found id: ""
	I1206 11:25:59.317196  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.317216  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:25:59.317240  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:25:59.317320  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:25:59.345958  675284 cri.go:89] found id: ""
	I1206 11:25:59.345984  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.345993  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:25:59.346000  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:25:59.346062  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:25:59.371951  675284 cri.go:89] found id: ""
	I1206 11:25:59.372030  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.372044  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:25:59.372052  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:25:59.372114  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:25:59.403531  675284 cri.go:89] found id: ""
	I1206 11:25:59.403557  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.403566  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:25:59.403572  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:25:59.403630  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:25:59.429320  675284 cri.go:89] found id: ""
	I1206 11:25:59.429347  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.429355  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:25:59.429362  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:25:59.429439  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:25:59.464457  675284 cri.go:89] found id: ""
	I1206 11:25:59.464484  675284 logs.go:282] 0 containers: []
	W1206 11:25:59.464493  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:25:59.464503  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:25:59.464515  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:25:59.535247  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:25:59.535286  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:25:59.552641  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:25:59.552672  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:25:59.615936  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:25:59.615958  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:25:59.615971  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:25:59.649480  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:25:59.649519  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:02.183928  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:02.196391  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:02.196461  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:02.234017  675284 cri.go:89] found id: ""
	I1206 11:26:02.234056  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.234065  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:02.234072  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:02.234134  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:02.268226  675284 cri.go:89] found id: ""
	I1206 11:26:02.268250  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.268265  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:02.268271  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:02.268329  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:02.294349  675284 cri.go:89] found id: ""
	I1206 11:26:02.294375  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.294385  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:02.294391  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:02.294452  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:02.320864  675284 cri.go:89] found id: ""
	I1206 11:26:02.320890  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.320899  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:02.320906  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:02.320995  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:02.346124  675284 cri.go:89] found id: ""
	I1206 11:26:02.346151  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.346159  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:02.346165  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:02.346246  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:02.372483  675284 cri.go:89] found id: ""
	I1206 11:26:02.372562  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.372585  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:02.372606  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:02.372704  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:02.398287  675284 cri.go:89] found id: ""
	I1206 11:26:02.398312  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.398321  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:02.398327  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:02.398384  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:02.427612  675284 cri.go:89] found id: ""
	I1206 11:26:02.427635  675284 logs.go:282] 0 containers: []
	W1206 11:26:02.427643  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:02.427652  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:02.427664  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:02.494466  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:02.494503  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:02.511516  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:02.511546  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:02.577515  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:02.577540  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:02.577553  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:02.608148  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:02.608179  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:05.143908  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:05.154230  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:05.154321  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:05.191339  675284 cri.go:89] found id: ""
	I1206 11:26:05.191366  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.191376  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:05.191383  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:05.191443  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:05.228017  675284 cri.go:89] found id: ""
	I1206 11:26:05.228041  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.228050  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:05.228056  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:05.228117  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:05.256432  675284 cri.go:89] found id: ""
	I1206 11:26:05.256458  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.256467  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:05.256474  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:05.256548  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:05.282554  675284 cri.go:89] found id: ""
	I1206 11:26:05.282581  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.282590  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:05.282599  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:05.282657  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:05.308212  675284 cri.go:89] found id: ""
	I1206 11:26:05.308236  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.308244  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:05.308250  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:05.308309  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:05.334576  675284 cri.go:89] found id: ""
	I1206 11:26:05.334602  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.334613  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:05.334619  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:05.334679  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:05.364951  675284 cri.go:89] found id: ""
	I1206 11:26:05.364985  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.364997  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:05.365010  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:05.365083  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:05.395184  675284 cri.go:89] found id: ""
	I1206 11:26:05.395211  675284 logs.go:282] 0 containers: []
	W1206 11:26:05.395221  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:05.395230  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:05.395241  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:05.462672  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:05.462705  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:05.480058  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:05.480087  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:05.546086  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:05.546151  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:05.546169  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:05.576274  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:05.576312  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:08.105158  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:08.115893  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:08.115962  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:08.144304  675284 cri.go:89] found id: ""
	I1206 11:26:08.144327  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.144336  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:08.144342  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:08.144402  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:08.168931  675284 cri.go:89] found id: ""
	I1206 11:26:08.168954  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.168963  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:08.168969  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:08.169030  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:08.210050  675284 cri.go:89] found id: ""
	I1206 11:26:08.210077  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.210086  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:08.210092  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:08.210155  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:08.241804  675284 cri.go:89] found id: ""
	I1206 11:26:08.241827  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.241837  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:08.241844  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:08.241908  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:08.274829  675284 cri.go:89] found id: ""
	I1206 11:26:08.274853  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.274863  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:08.274870  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:08.274930  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:08.301724  675284 cri.go:89] found id: ""
	I1206 11:26:08.301805  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.301828  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:08.301849  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:08.301961  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:08.327752  675284 cri.go:89] found id: ""
	I1206 11:26:08.327784  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.327792  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:08.327799  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:08.327878  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:08.355494  675284 cri.go:89] found id: ""
	I1206 11:26:08.355518  675284 logs.go:282] 0 containers: []
	W1206 11:26:08.355526  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:08.355535  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:08.355546  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:08.422811  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:08.422850  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:08.441004  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:08.441034  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:08.507474  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:08.507497  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:08.507510  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:08.538829  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:08.538866  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
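Each cycle sweeps the same eight expected workloads (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner) with "crictl ps -a --quiet --name=<component>". Because -a includes exited containers, eight empty results mean the runtime never created any control-plane container at all, not merely that they crashed. A hedged Go sketch of that sweep, shelling out to the same crictl invocation seen in the log (the component list is copied from the log; the rest is illustrative):

// list_components.go - sweep the CRI runtime for each expected
// control-plane container, including stopped ones (-a).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}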
	I1206 11:26:11.071279  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:11.082106  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:11.082227  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:11.108239  675284 cri.go:89] found id: ""
	I1206 11:26:11.108312  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.108341  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:11.108361  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:11.108459  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:11.136055  675284 cri.go:89] found id: ""
	I1206 11:26:11.136084  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.136094  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:11.136100  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:11.136205  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:11.166458  675284 cri.go:89] found id: ""
	I1206 11:26:11.166487  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.166495  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:11.166501  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:11.166563  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:11.207331  675284 cri.go:89] found id: ""
	I1206 11:26:11.207353  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.207361  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:11.207367  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:11.207454  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:11.251718  675284 cri.go:89] found id: ""
	I1206 11:26:11.251747  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.251758  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:11.251764  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:11.251827  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:11.277395  675284 cri.go:89] found id: ""
	I1206 11:26:11.277422  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.277431  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:11.277438  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:11.277503  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:11.303979  675284 cri.go:89] found id: ""
	I1206 11:26:11.304003  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.304012  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:11.304019  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:11.304078  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:11.330597  675284 cri.go:89] found id: ""
	I1206 11:26:11.330623  675284 logs.go:282] 0 containers: []
	W1206 11:26:11.330632  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:11.330642  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:11.330660  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:11.397533  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:11.397569  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:11.414859  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:11.414888  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:11.480621  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:11.480643  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:11.480656  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:11.512480  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:11.512520  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:14.043727  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:14.054112  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:14.054185  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:14.084060  675284 cri.go:89] found id: ""
	I1206 11:26:14.084134  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.084157  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:14.084177  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:14.084270  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:14.111256  675284 cri.go:89] found id: ""
	I1206 11:26:14.111285  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.111294  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:14.111301  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:14.111409  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:14.138008  675284 cri.go:89] found id: ""
	I1206 11:26:14.138091  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.138106  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:14.138113  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:14.138182  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:14.163390  675284 cri.go:89] found id: ""
	I1206 11:26:14.163417  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.163426  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:14.163433  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:14.163495  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:14.202781  675284 cri.go:89] found id: ""
	I1206 11:26:14.202814  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.202824  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:14.202830  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:14.202890  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:14.232264  675284 cri.go:89] found id: ""
	I1206 11:26:14.232290  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.232299  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:14.232305  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:14.232363  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:14.260188  675284 cri.go:89] found id: ""
	I1206 11:26:14.260215  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.260225  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:14.260231  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:14.260308  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:14.286674  675284 cri.go:89] found id: ""
	I1206 11:26:14.286697  675284 logs.go:282] 0 containers: []
	W1206 11:26:14.286706  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:14.286715  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:14.286746  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:14.355300  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:14.355338  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:14.371828  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:14.371857  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:14.438390  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:14.438412  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:14.438426  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:14.469613  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:14.469651  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:17.002894  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:17.014699  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:17.014780  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:17.044786  675284 cri.go:89] found id: ""
	I1206 11:26:17.044814  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.044824  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:17.044831  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:17.044889  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:17.080906  675284 cri.go:89] found id: ""
	I1206 11:26:17.080933  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.080942  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:17.080948  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:17.081011  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:17.109767  675284 cri.go:89] found id: ""
	I1206 11:26:17.109790  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.109798  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:17.109804  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:17.109864  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:17.135212  675284 cri.go:89] found id: ""
	I1206 11:26:17.135280  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.135306  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:17.135322  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:17.135396  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:17.171908  675284 cri.go:89] found id: ""
	I1206 11:26:17.171937  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.171960  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:17.171967  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:17.172049  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:17.202951  675284 cri.go:89] found id: ""
	I1206 11:26:17.202977  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.202986  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:17.202998  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:17.203059  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:17.237354  675284 cri.go:89] found id: ""
	I1206 11:26:17.237382  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.237390  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:17.237397  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:17.237455  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:17.268637  675284 cri.go:89] found id: ""
	I1206 11:26:17.268661  675284 logs.go:282] 0 containers: []
	W1206 11:26:17.268670  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:17.268680  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:17.268691  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:17.339104  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:17.339168  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:17.356066  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:17.356095  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:17.418438  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:17.418457  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:17.418470  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:17.448896  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:17.448930  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:19.979254  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:19.989589  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:19.989664  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:20.023384  675284 cri.go:89] found id: ""
	I1206 11:26:20.023413  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.023422  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:20.023430  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:20.023495  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:20.052646  675284 cri.go:89] found id: ""
	I1206 11:26:20.052683  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.052693  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:20.052699  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:20.052761  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:20.078983  675284 cri.go:89] found id: ""
	I1206 11:26:20.079007  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.079015  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:20.079032  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:20.079094  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:20.107420  675284 cri.go:89] found id: ""
	I1206 11:26:20.107442  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.107451  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:20.107457  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:20.107513  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:20.138282  675284 cri.go:89] found id: ""
	I1206 11:26:20.138354  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.138377  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:20.138396  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:20.138487  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:20.166102  675284 cri.go:89] found id: ""
	I1206 11:26:20.166172  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.166187  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:20.166195  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:20.166264  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:20.201112  675284 cri.go:89] found id: ""
	I1206 11:26:20.201136  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.201145  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:20.201151  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:20.201216  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:20.230837  675284 cri.go:89] found id: ""
	I1206 11:26:20.230862  675284 logs.go:282] 0 containers: []
	W1206 11:26:20.230871  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:20.230880  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:20.230898  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:20.311058  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:20.311095  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:20.327767  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:20.327839  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:20.397878  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:20.397898  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:20.397911  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:20.427923  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:20.427958  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:22.956217  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:22.966434  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:22.966505  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:22.993319  675284 cri.go:89] found id: ""
	I1206 11:26:22.993351  675284 logs.go:282] 0 containers: []
	W1206 11:26:22.993361  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:22.993367  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:22.993429  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:23.024109  675284 cri.go:89] found id: ""
	I1206 11:26:23.024142  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.024152  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:23.024158  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:23.024218  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:23.054121  675284 cri.go:89] found id: ""
	I1206 11:26:23.054144  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.054152  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:23.054158  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:23.054218  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:23.082084  675284 cri.go:89] found id: ""
	I1206 11:26:23.082110  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.082119  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:23.082126  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:23.082188  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:23.113893  675284 cri.go:89] found id: ""
	I1206 11:26:23.113918  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.113927  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:23.113933  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:23.113995  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:23.141773  675284 cri.go:89] found id: ""
	I1206 11:26:23.141798  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.141807  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:23.141814  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:23.141873  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:23.167962  675284 cri.go:89] found id: ""
	I1206 11:26:23.167987  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.167996  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:23.168003  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:23.168062  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:23.212558  675284 cri.go:89] found id: ""
	I1206 11:26:23.212629  675284 logs.go:282] 0 containers: []
	W1206 11:26:23.212666  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:23.212695  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:23.212723  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:23.284101  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:23.284139  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:23.300930  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:23.300963  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:23.363737  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:23.363770  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:23.363796  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:23.397162  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:23.397190  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:25.929438  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:25.940181  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:25.940276  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:25.969654  675284 cri.go:89] found id: ""
	I1206 11:26:25.969681  675284 logs.go:282] 0 containers: []
	W1206 11:26:25.969689  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:25.969697  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:25.969761  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:25.999455  675284 cri.go:89] found id: ""
	I1206 11:26:25.999480  675284 logs.go:282] 0 containers: []
	W1206 11:26:25.999489  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:25.999496  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:25.999555  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:26.029565  675284 cri.go:89] found id: ""
	I1206 11:26:26.029591  675284 logs.go:282] 0 containers: []
	W1206 11:26:26.029601  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:26.029608  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:26.029670  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:26.061233  675284 cri.go:89] found id: ""
	I1206 11:26:26.061262  675284 logs.go:282] 0 containers: []
	W1206 11:26:26.061271  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:26.061280  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:26.061342  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:26.091770  675284 cri.go:89] found id: ""
	I1206 11:26:26.091797  675284 logs.go:282] 0 containers: []
	W1206 11:26:26.091807  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:26.091814  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:26.091881  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:26.120547  675284 cri.go:89] found id: ""
	I1206 11:26:26.120573  675284 logs.go:282] 0 containers: []
	W1206 11:26:26.120583  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:26.120589  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:26.120647  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:26.148594  675284 cri.go:89] found id: ""
	I1206 11:26:26.148624  675284 logs.go:282] 0 containers: []
	W1206 11:26:26.148634  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:26.148640  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:26.148764  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:26.179531  675284 cri.go:89] found id: ""
	I1206 11:26:26.179557  675284 logs.go:282] 0 containers: []
	W1206 11:26:26.179567  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:26.179577  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:26.179588  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:26.214225  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:26.214263  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:26.251991  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:26.252016  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:26.322658  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:26.322694  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:26.339507  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:26.339538  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:26.408606  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
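The recurring "connection to the server localhost:8443 was refused" from kubectl is a downstream symptom rather than the root failure: "refused" means nothing is listening on port 8443 at all, which is consistent with the empty crictl sweeps above (no apiserver container ever existed to serve it). A tiny probe, given here purely as an illustrative sketch, distinguishes that case from a firewall or network timeout:

// probe_apiserver.go - distinguish "connection refused" (nothing
// listening, as in this log) from a dial timeout (network/firewall).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // prints "connection refused" in this scenario
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}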
	I1206 11:26:28.908801  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:28.918832  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:28.918908  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:28.946319  675284 cri.go:89] found id: ""
	I1206 11:26:28.946347  675284 logs.go:282] 0 containers: []
	W1206 11:26:28.946356  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:28.946363  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:28.946424  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:28.979114  675284 cri.go:89] found id: ""
	I1206 11:26:28.979167  675284 logs.go:282] 0 containers: []
	W1206 11:26:28.979176  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:28.979182  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:28.979244  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:29.009266  675284 cri.go:89] found id: ""
	I1206 11:26:29.009294  675284 logs.go:282] 0 containers: []
	W1206 11:26:29.009303  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:29.009319  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:29.009431  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:29.037570  675284 cri.go:89] found id: ""
	I1206 11:26:29.037596  675284 logs.go:282] 0 containers: []
	W1206 11:26:29.037605  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:29.037611  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:29.037677  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:29.068883  675284 cri.go:89] found id: ""
	I1206 11:26:29.068909  675284 logs.go:282] 0 containers: []
	W1206 11:26:29.068918  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:29.068925  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:29.069042  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:29.094166  675284 cri.go:89] found id: ""
	I1206 11:26:29.094192  675284 logs.go:282] 0 containers: []
	W1206 11:26:29.094201  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:29.094207  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:29.094272  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:29.120221  675284 cri.go:89] found id: ""
	I1206 11:26:29.120247  675284 logs.go:282] 0 containers: []
	W1206 11:26:29.120255  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:29.120262  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:29.120320  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:29.145924  675284 cri.go:89] found id: ""
	I1206 11:26:29.145950  675284 logs.go:282] 0 containers: []
	W1206 11:26:29.145958  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:29.145968  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:29.145979  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:29.224250  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:29.224341  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:29.242789  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:29.242817  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:29.306672  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:29.306744  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:29.306774  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:29.337289  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:29.337323  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:31.867323  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:31.877657  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:31.877738  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:31.906108  675284 cri.go:89] found id: ""
	I1206 11:26:31.906132  675284 logs.go:282] 0 containers: []
	W1206 11:26:31.906140  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:31.906146  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:31.906207  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:31.933686  675284 cri.go:89] found id: ""
	I1206 11:26:31.933713  675284 logs.go:282] 0 containers: []
	W1206 11:26:31.933731  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:31.933737  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:31.933797  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:31.964009  675284 cri.go:89] found id: ""
	I1206 11:26:31.964035  675284 logs.go:282] 0 containers: []
	W1206 11:26:31.964045  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:31.964051  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:31.964109  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:31.990693  675284 cri.go:89] found id: ""
	I1206 11:26:31.990719  675284 logs.go:282] 0 containers: []
	W1206 11:26:31.990728  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:31.990734  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:31.990791  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:32.017851  675284 cri.go:89] found id: ""
	I1206 11:26:32.017878  675284 logs.go:282] 0 containers: []
	W1206 11:26:32.017887  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:32.017893  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:32.017960  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:32.045167  675284 cri.go:89] found id: ""
	I1206 11:26:32.045194  675284 logs.go:282] 0 containers: []
	W1206 11:26:32.045203  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:32.045210  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:32.045269  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:32.072285  675284 cri.go:89] found id: ""
	I1206 11:26:32.072311  675284 logs.go:282] 0 containers: []
	W1206 11:26:32.072321  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:32.072327  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:32.072389  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:32.101652  675284 cri.go:89] found id: ""
	I1206 11:26:32.101755  675284 logs.go:282] 0 containers: []
	W1206 11:26:32.101793  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:32.101821  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:32.101844  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:32.168939  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:32.168975  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:32.187619  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:32.187700  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:32.262130  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:32.262150  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:32.262174  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:32.293985  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:32.294021  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:34.822288  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:34.832744  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:34.832817  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:34.860643  675284 cri.go:89] found id: ""
	I1206 11:26:34.860669  675284 logs.go:282] 0 containers: []
	W1206 11:26:34.860678  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:34.860685  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:34.860745  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:34.887058  675284 cri.go:89] found id: ""
	I1206 11:26:34.887085  675284 logs.go:282] 0 containers: []
	W1206 11:26:34.887095  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:34.887102  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:34.887180  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:34.911877  675284 cri.go:89] found id: ""
	I1206 11:26:34.911907  675284 logs.go:282] 0 containers: []
	W1206 11:26:34.911916  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:34.911923  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:34.911988  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:34.942427  675284 cri.go:89] found id: ""
	I1206 11:26:34.942453  675284 logs.go:282] 0 containers: []
	W1206 11:26:34.942462  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:34.942469  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:34.942526  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:34.967910  675284 cri.go:89] found id: ""
	I1206 11:26:34.967936  675284 logs.go:282] 0 containers: []
	W1206 11:26:34.967946  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:34.967953  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:34.968013  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:34.993437  675284 cri.go:89] found id: ""
	I1206 11:26:34.993465  675284 logs.go:282] 0 containers: []
	W1206 11:26:34.993474  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:34.993481  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:34.993540  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:35.022193  675284 cri.go:89] found id: ""
	I1206 11:26:35.022220  675284 logs.go:282] 0 containers: []
	W1206 11:26:35.022230  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:35.022236  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:35.022306  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:35.053447  675284 cri.go:89] found id: ""
	I1206 11:26:35.053479  675284 logs.go:282] 0 containers: []
	W1206 11:26:35.053489  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:35.053498  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:35.053516  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:35.119861  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:35.119879  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:35.119891  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:35.151475  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:35.151510  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:35.188783  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:35.188862  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:35.267034  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:35.267073  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:37.783750  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:37.799952  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:37.800019  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:37.841498  675284 cri.go:89] found id: ""
	I1206 11:26:37.841519  675284 logs.go:282] 0 containers: []
	W1206 11:26:37.841527  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:37.841534  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:37.841589  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:37.877072  675284 cri.go:89] found id: ""
	I1206 11:26:37.877094  675284 logs.go:282] 0 containers: []
	W1206 11:26:37.877102  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:37.877111  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:37.877170  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:37.905288  675284 cri.go:89] found id: ""
	I1206 11:26:37.905309  675284 logs.go:282] 0 containers: []
	W1206 11:26:37.905318  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:37.905324  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:37.905389  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:37.945096  675284 cri.go:89] found id: ""
	I1206 11:26:37.945175  675284 logs.go:282] 0 containers: []
	W1206 11:26:37.945199  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:37.945222  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:37.945319  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:37.998156  675284 cri.go:89] found id: ""
	I1206 11:26:37.998186  675284 logs.go:282] 0 containers: []
	W1206 11:26:37.998195  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:37.998202  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:37.998279  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:38.044315  675284 cri.go:89] found id: ""
	I1206 11:26:38.044384  675284 logs.go:282] 0 containers: []
	W1206 11:26:38.044398  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:38.044406  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:38.044472  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:38.097778  675284 cri.go:89] found id: ""
	I1206 11:26:38.097806  675284 logs.go:282] 0 containers: []
	W1206 11:26:38.097817  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:38.097824  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:38.097883  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:38.133327  675284 cri.go:89] found id: ""
	I1206 11:26:38.133405  675284 logs.go:282] 0 containers: []
	W1206 11:26:38.133429  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:38.133451  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:38.133490  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:38.219846  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:38.219885  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:38.285689  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:38.285725  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:38.370677  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:38.370700  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:38.370714  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:38.406137  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:38.406181  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:40.951259  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:40.961101  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:40.961170  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:40.986880  675284 cri.go:89] found id: ""
	I1206 11:26:40.986905  675284 logs.go:282] 0 containers: []
	W1206 11:26:40.986914  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:40.986920  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:40.986978  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:41.015931  675284 cri.go:89] found id: ""
	I1206 11:26:41.015957  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.015966  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:41.015973  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:41.016040  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:41.043663  675284 cri.go:89] found id: ""
	I1206 11:26:41.043686  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.043695  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:41.043701  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:41.043762  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:41.073634  675284 cri.go:89] found id: ""
	I1206 11:26:41.073660  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.073669  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:41.073675  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:41.073770  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:41.101810  675284 cri.go:89] found id: ""
	I1206 11:26:41.101837  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.101846  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:41.101852  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:41.101918  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:41.144635  675284 cri.go:89] found id: ""
	I1206 11:26:41.144668  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.144678  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:41.144684  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:41.144750  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:41.189707  675284 cri.go:89] found id: ""
	I1206 11:26:41.189748  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.189757  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:41.189764  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:41.189825  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:41.226499  675284 cri.go:89] found id: ""
	I1206 11:26:41.226527  675284 logs.go:282] 0 containers: []
	W1206 11:26:41.226536  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:41.226545  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:41.226556  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:41.265361  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:41.265396  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:41.297840  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:41.297870  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:41.367395  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:41.367438  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:41.387984  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:41.388015  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:41.476473  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
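
The cycle above is the pattern this whole wait loop follows: probe for a kube-apiserver process, ask the CRI runtime for each expected control-plane container, and, finding none, fall back to gathering diagnostics. A minimal sketch of the crictl probe phase, assuming only that crictl is installed on the node; the helper name listContainerIDs is illustrative, not minikube's actual cri.go API:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs shells out to crictl and returns the IDs of all
    // containers (any state) whose name matches the filter, mirroring the
    // `sudo crictl ps -a --quiet --name=<name>` calls in the log above.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
    	}
    	return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
    	// The same components the log polls for.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

In this run every such query returns an empty ID list ("found id: \"\" ... 0 containers"), which is why each cycle degrades into log gathering.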
	I1206 11:26:43.976692  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:43.991913  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:43.992001  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:44.054134  675284 cri.go:89] found id: ""
	I1206 11:26:44.054164  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.054178  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:44.054186  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:44.054249  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:44.107610  675284 cri.go:89] found id: ""
	I1206 11:26:44.107637  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.107646  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:44.107653  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:44.107716  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:44.174481  675284 cri.go:89] found id: ""
	I1206 11:26:44.174508  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.174517  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:44.174526  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:44.174584  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:44.239893  675284 cri.go:89] found id: ""
	I1206 11:26:44.239919  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.239928  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:44.239934  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:44.239994  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:44.310580  675284 cri.go:89] found id: ""
	I1206 11:26:44.310608  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.310617  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:44.310624  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:44.310681  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:44.362176  675284 cri.go:89] found id: ""
	I1206 11:26:44.362204  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.362213  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:44.362219  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:44.362278  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:44.420426  675284 cri.go:89] found id: ""
	I1206 11:26:44.420465  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.420474  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:44.420480  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:44.420550  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:44.461727  675284 cri.go:89] found id: ""
	I1206 11:26:44.461770  675284 logs.go:282] 0 containers: []
	W1206 11:26:44.461780  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:44.461788  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:44.461804  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:44.553197  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:44.553237  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:44.589299  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:44.589337  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:44.703074  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:44.703097  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:44.703114  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:44.759668  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:44.759704  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
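
Every one of these cycles ends the same way: kubectl cannot reach the apiserver endpoint from the kubeconfig, and "The connection to the server localhost:8443 was refused" means nothing is listening on that port at all, rather than an auth or TLS problem. A quick standalone check of the same condition, as a sketch; the address and two-second timeout are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The apiserver endpoint kubectl is trying to reach.
    	addr := "localhost:8443"
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		// "connection refused" here reproduces the kubectl error in the
    		// log: no process is bound to the port, so the kernel rejects
    		// the TCP connection outright.
    		fmt.Printf("dial %s: %v\n", addr, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("%s is accepting connections\n", addr)
    }

A refused connection from this probe, together with the empty crictl listings, pins the failure on the apiserver container never starting rather than on kubectl configuration.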
	I1206 11:26:47.314782  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:47.331781  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:47.331859  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:47.379217  675284 cri.go:89] found id: ""
	I1206 11:26:47.379250  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.379259  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:47.379266  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:47.379327  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:47.423044  675284 cri.go:89] found id: ""
	I1206 11:26:47.423070  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.423079  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:47.423085  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:47.423194  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:47.502018  675284 cri.go:89] found id: ""
	I1206 11:26:47.502046  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.502055  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:47.502061  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:47.502119  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:47.550045  675284 cri.go:89] found id: ""
	I1206 11:26:47.550072  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.550081  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:47.550088  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:47.550144  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:47.591768  675284 cri.go:89] found id: ""
	I1206 11:26:47.591797  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.591806  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:47.591812  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:47.591871  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:47.639453  675284 cri.go:89] found id: ""
	I1206 11:26:47.639480  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.639488  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:47.639495  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:47.639552  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:47.684539  675284 cri.go:89] found id: ""
	I1206 11:26:47.684567  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.684575  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:47.684586  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:47.684645  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:47.741779  675284 cri.go:89] found id: ""
	I1206 11:26:47.741807  675284 logs.go:282] 0 containers: []
	W1206 11:26:47.741822  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:47.741832  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:47.741843  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:47.809678  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:47.809768  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:47.896040  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:47.896077  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:47.918791  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:47.918827  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:48.048142  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:48.048166  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:48.048181  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:50.602342  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:50.612277  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:50.612356  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:50.638603  675284 cri.go:89] found id: ""
	I1206 11:26:50.638626  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.638634  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:50.638640  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:50.638700  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:50.664536  675284 cri.go:89] found id: ""
	I1206 11:26:50.664563  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.664572  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:50.664578  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:50.664636  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:50.692135  675284 cri.go:89] found id: ""
	I1206 11:26:50.692162  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.692172  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:50.692178  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:50.692234  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:50.717350  675284 cri.go:89] found id: ""
	I1206 11:26:50.717373  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.717382  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:50.717388  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:50.717454  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:50.746453  675284 cri.go:89] found id: ""
	I1206 11:26:50.746475  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.746483  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:50.746490  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:50.746547  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:50.770866  675284 cri.go:89] found id: ""
	I1206 11:26:50.770893  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.770902  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:50.770909  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:50.770966  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:50.795817  675284 cri.go:89] found id: ""
	I1206 11:26:50.795840  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.795849  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:50.795855  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:50.795920  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:50.822614  675284 cri.go:89] found id: ""
	I1206 11:26:50.822637  675284 logs.go:282] 0 containers: []
	W1206 11:26:50.822646  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:50.822655  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:50.822666  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:50.889360  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:50.889398  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:50.905891  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:50.905966  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:50.968740  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:50.968762  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:50.968775  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:50.998748  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:50.998787  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
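
The timestamps show the whole probe-and-gather sequence repeating roughly every two to three seconds: the runner is sitting in a poll loop, waiting for the apiserver to come up before an overall deadline expires. A generic sketch of that loop shape, where the interval, timeout, and pollUntil name are all assumptions rather than minikube's actual wait code:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pollUntil calls probe every interval until it returns true or the
    // deadline passes, which is the shape of the retry loop in this log.
    func pollUntil(interval, timeout time.Duration, probe func() bool) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if probe() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	start := time.Now()
    	err := pollUntil(2500*time.Millisecond, 10*time.Second, func() bool {
    		fmt.Printf("%6.2fs: probing for kube-apiserver...\n", time.Since(start).Seconds())
    		return false // in the failing run above, the probe never succeeds
    	})
    	fmt.Println(err)
    }

Because the probe never succeeds here, the loop runs to its deadline, which is what drives the very long durations reported for the affected tests.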
	I1206 11:26:53.530272  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:53.540383  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:53.540447  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:53.569658  675284 cri.go:89] found id: ""
	I1206 11:26:53.569677  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.569684  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:53.569690  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:53.569745  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:53.597967  675284 cri.go:89] found id: ""
	I1206 11:26:53.597995  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.598004  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:53.598011  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:53.598067  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:53.631778  675284 cri.go:89] found id: ""
	I1206 11:26:53.631797  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.631806  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:53.631812  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:53.631870  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:53.665703  675284 cri.go:89] found id: ""
	I1206 11:26:53.665724  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.665732  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:53.665738  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:53.665811  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:53.697089  675284 cri.go:89] found id: ""
	I1206 11:26:53.697115  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.697124  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:53.697130  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:53.697186  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:53.729507  675284 cri.go:89] found id: ""
	I1206 11:26:53.729524  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.729535  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:53.729542  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:53.729597  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:53.762634  675284 cri.go:89] found id: ""
	I1206 11:26:53.762653  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.762661  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:53.762667  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:53.762724  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:53.801873  675284 cri.go:89] found id: ""
	I1206 11:26:53.801897  675284 logs.go:282] 0 containers: []
	W1206 11:26:53.801906  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:53.801916  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:53.801928  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:53.821213  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:53.821235  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:53.913552  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:53.913575  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:53.913592  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:53.945933  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:53.945971  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:53.981500  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:53.981520  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:56.580241  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:56.594159  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:56.594258  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:56.635326  675284 cri.go:89] found id: ""
	I1206 11:26:56.635350  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.635420  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:56.635429  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:56.635488  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:56.665272  675284 cri.go:89] found id: ""
	I1206 11:26:56.665293  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.665300  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:56.665307  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:56.665366  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:56.694606  675284 cri.go:89] found id: ""
	I1206 11:26:56.694628  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.694636  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:56.694642  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:56.694702  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:56.724038  675284 cri.go:89] found id: ""
	I1206 11:26:56.724059  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.724067  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:56.724074  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:56.724129  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:56.759866  675284 cri.go:89] found id: ""
	I1206 11:26:56.759888  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.759897  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:56.759903  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:56.759962  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:56.804868  675284 cri.go:89] found id: ""
	I1206 11:26:56.804889  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.804897  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:56.804905  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:56.804962  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:56.840119  675284 cri.go:89] found id: ""
	I1206 11:26:56.840140  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.840149  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:56.840155  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:56.840216  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:56.867988  675284 cri.go:89] found id: ""
	I1206 11:26:56.868010  675284 logs.go:282] 0 containers: []
	W1206 11:26:56.868018  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:56.868028  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:26:56.868039  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:26:56.952153  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:26:56.952233  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:26:56.972773  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:26:56.972797  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:26:57.056628  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:26:57.056696  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:57.056724  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:57.109837  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:57.109953  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:26:59.651813  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:26:59.661846  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:26:59.661931  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:26:59.701834  675284 cri.go:89] found id: ""
	I1206 11:26:59.701858  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.701865  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:26:59.701872  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:26:59.701934  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:26:59.732031  675284 cri.go:89] found id: ""
	I1206 11:26:59.732052  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.732060  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:26:59.732067  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:26:59.732132  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:26:59.764376  675284 cri.go:89] found id: ""
	I1206 11:26:59.764405  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.764414  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:26:59.764429  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:26:59.764490  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:26:59.801758  675284 cri.go:89] found id: ""
	I1206 11:26:59.801792  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.801801  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:26:59.801807  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:26:59.801870  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:26:59.829851  675284 cri.go:89] found id: ""
	I1206 11:26:59.829873  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.829882  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:26:59.829888  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:26:59.829947  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:26:59.861777  675284 cri.go:89] found id: ""
	I1206 11:26:59.861800  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.861809  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:26:59.861816  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:26:59.861875  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:26:59.887658  675284 cri.go:89] found id: ""
	I1206 11:26:59.887681  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.887689  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:26:59.887695  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:26:59.887770  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:26:59.912688  675284 cri.go:89] found id: ""
	I1206 11:26:59.912710  675284 logs.go:282] 0 containers: []
	W1206 11:26:59.912719  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:26:59.912728  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:26:59.912740  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:26:59.957854  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:26:59.957892  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:00.020565  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:00.020621  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:00.257399  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:00.257439  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:00.329614  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:00.329651  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:00.453905  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
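
When the probe comes up empty, the run gathers the same fixed set of diagnostics: the kubelet and CRI-O journals, a filtered dmesg, kubectl describe nodes, and a container listing that falls back to docker ps. A sketch of driving that command set from Go; the logSources map is illustrative, not how minikube's logs.go is actually structured:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // The diagnostic sources and shell commands seen in the log above.
    var logSources = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"CRI-O":            "sudo journalctl -u crio -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for name, cmd := range logSources {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			// Individual collectors may fail (e.g. no crictl on PATH);
    			// keep going, as the gathering above tolerates per-source failures.
    			fmt.Printf("  %s failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("  captured %d bytes\n", len(out))
    	}
    }

Incidentally, the "Gathering logs for ..." lines come out in a different order on each cycle, which is consistent with the sources being iterated from a Go map, whose iteration order is randomized, as the sketch above also exhibits.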
	I1206 11:27:02.954149  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:02.965635  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:02.965704  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:03.040311  675284 cri.go:89] found id: ""
	I1206 11:27:03.040358  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.040367  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:03.040375  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:03.040435  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:03.085128  675284 cri.go:89] found id: ""
	I1206 11:27:03.085157  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.085167  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:03.085173  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:03.085278  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:03.124432  675284 cri.go:89] found id: ""
	I1206 11:27:03.124458  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.124466  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:03.124472  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:03.124536  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:03.164373  675284 cri.go:89] found id: ""
	I1206 11:27:03.164398  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.164407  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:03.164413  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:03.164470  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:03.199960  675284 cri.go:89] found id: ""
	I1206 11:27:03.199986  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.199995  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:03.200001  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:03.200060  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:03.234817  675284 cri.go:89] found id: ""
	I1206 11:27:03.234845  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.234854  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:03.234862  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:03.234924  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:03.263948  675284 cri.go:89] found id: ""
	I1206 11:27:03.263975  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.263984  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:03.263990  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:03.264047  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:03.293398  675284 cri.go:89] found id: ""
	I1206 11:27:03.293420  675284 logs.go:282] 0 containers: []
	W1206 11:27:03.293429  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:03.293439  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:03.293451  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:03.327945  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:03.327974  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:03.404659  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:03.404697  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:03.422139  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:03.422167  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:03.553285  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:03.553305  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:03.553318  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:06.091081  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:06.107546  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:06.107618  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:06.138591  675284 cri.go:89] found id: ""
	I1206 11:27:06.138617  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.138625  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:06.138632  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:06.138692  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:06.180334  675284 cri.go:89] found id: ""
	I1206 11:27:06.180361  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.180369  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:06.180376  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:06.180440  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:06.224596  675284 cri.go:89] found id: ""
	I1206 11:27:06.224622  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.224630  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:06.224636  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:06.224696  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:06.261710  675284 cri.go:89] found id: ""
	I1206 11:27:06.261735  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.261744  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:06.261751  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:06.261831  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:06.302181  675284 cri.go:89] found id: ""
	I1206 11:27:06.302204  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.302213  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:06.302219  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:06.302281  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:06.344443  675284 cri.go:89] found id: ""
	I1206 11:27:06.344475  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.344484  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:06.344491  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:06.344551  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:06.383097  675284 cri.go:89] found id: ""
	I1206 11:27:06.383163  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.383173  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:06.383180  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:06.383245  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:06.450318  675284 cri.go:89] found id: ""
	I1206 11:27:06.450349  675284 logs.go:282] 0 containers: []
	W1206 11:27:06.450426  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:06.450443  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:06.450454  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:06.549464  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:06.549500  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:06.570351  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:06.570383  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:06.667879  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:06.667901  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:06.667913  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:06.725517  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:06.725558  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:09.295618  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:09.311497  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:09.311574  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:09.354557  675284 cri.go:89] found id: ""
	I1206 11:27:09.354583  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.354592  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:09.354599  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:09.354658  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:09.385530  675284 cri.go:89] found id: ""
	I1206 11:27:09.385562  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.385571  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:09.385577  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:09.385636  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:09.422308  675284 cri.go:89] found id: ""
	I1206 11:27:09.422344  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.422354  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:09.422360  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:09.422416  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:09.459209  675284 cri.go:89] found id: ""
	I1206 11:27:09.459232  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.459241  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:09.459247  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:09.459312  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:09.493052  675284 cri.go:89] found id: ""
	I1206 11:27:09.493090  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.493099  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:09.493106  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:09.493172  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:09.526465  675284 cri.go:89] found id: ""
	I1206 11:27:09.526491  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.526499  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:09.526506  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:09.526562  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:09.560715  675284 cri.go:89] found id: ""
	I1206 11:27:09.560741  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.560750  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:09.560756  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:09.560816  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:09.598993  675284 cri.go:89] found id: ""
	I1206 11:27:09.599017  675284 logs.go:282] 0 containers: []
	W1206 11:27:09.599027  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:09.599037  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:09.599048  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:09.688859  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:09.688989  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:09.708851  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:09.708930  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:09.841633  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:09.841709  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:09.841740  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:09.876681  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:09.876717  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:12.419702  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:12.432416  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:12.432486  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:12.489837  675284 cri.go:89] found id: ""
	I1206 11:27:12.489866  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.489876  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:12.489882  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:12.489945  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:12.522494  675284 cri.go:89] found id: ""
	I1206 11:27:12.522522  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.522531  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:12.522537  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:12.522599  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:12.555604  675284 cri.go:89] found id: ""
	I1206 11:27:12.555631  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.555641  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:12.555648  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:12.555712  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:12.592530  675284 cri.go:89] found id: ""
	I1206 11:27:12.592558  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.592567  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:12.592574  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:12.592633  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:12.625004  675284 cri.go:89] found id: ""
	I1206 11:27:12.625031  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.625041  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:12.625047  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:12.625109  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:12.656378  675284 cri.go:89] found id: ""
	I1206 11:27:12.656461  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.656484  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:12.656524  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:12.656622  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:12.705515  675284 cri.go:89] found id: ""
	I1206 11:27:12.705589  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.705613  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:12.705630  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:12.705719  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:12.751444  675284 cri.go:89] found id: ""
	I1206 11:27:12.751525  675284 logs.go:282] 0 containers: []
	W1206 11:27:12.751549  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:12.751571  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:12.751611  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:12.854885  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:12.854957  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:12.854985  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:12.892661  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:12.892696  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:12.930905  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:12.930930  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:13.006381  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:13.006473  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
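
Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*, which looks for the newest process (-n) whose full command line (-f) exactly matches (-x) that pattern; throughout this run it finds nothing, so the container checks follow. A sketch of the same probe and of how its exit status reads, assuming only that pgrep is available:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// -x: the whole command line must match the pattern exactly,
    	// -n: newest matching process only,
    	// -f: match against the full command line, not just the name.
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits with status 1 when nothing matches, which is the
    		// situation the empty crictl results above suggest: no apiserver
    		// process exists on the node.
    		fmt.Printf("no kube-apiserver process found: %v\n", err)
    		return
    	}
    	fmt.Printf("kube-apiserver pid: %s", out)
    }
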
	I1206 11:27:15.528723  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:15.540612  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:15.540680  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:15.569973  675284 cri.go:89] found id: ""
	I1206 11:27:15.569996  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.570005  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:15.570011  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:15.570073  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:15.601275  675284 cri.go:89] found id: ""
	I1206 11:27:15.601303  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.601311  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:15.601318  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:15.601377  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:15.638462  675284 cri.go:89] found id: ""
	I1206 11:27:15.638535  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.638565  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:15.638584  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:15.638691  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:15.669046  675284 cri.go:89] found id: ""
	I1206 11:27:15.669072  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.669080  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:15.669086  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:15.669146  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:15.722519  675284 cri.go:89] found id: ""
	I1206 11:27:15.722547  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.722557  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:15.722563  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:15.722627  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:15.755293  675284 cri.go:89] found id: ""
	I1206 11:27:15.755320  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.755329  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:15.755335  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:15.755391  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:15.794656  675284 cri.go:89] found id: ""
	I1206 11:27:15.794683  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.794692  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:15.794698  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:15.794769  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:15.823116  675284 cri.go:89] found id: ""
	I1206 11:27:15.823178  675284 logs.go:282] 0 containers: []
	W1206 11:27:15.823187  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:15.823197  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:15.823213  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:15.840691  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:15.840722  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:15.923430  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:15.923463  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:15.923476  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:15.970715  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:15.970756  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:16.035118  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:16.035162  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
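
The cycles above repeat with only the timestamps changing: minikube re-probes the control plane and re-gathers the same logs every few seconds until its wait deadline expires. A minimal bash sketch of one such pass, assembled from the exact commands this run executes over SSH (the component names and the v1.35.0-beta.0 kubectl path are verbatim from the log; only the framing as a single script is inferred from the timestamps):

# One diagnostic pass, as logged in the cycles above.
# Is an apiserver process for this profile running at all?
sudo pgrep -xnf 'kube-apiserver.*minikube.*'

# Look for each control-plane container, in any state.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet storage-provisioner; do
  sudo crictl ps -a --quiet --name="$name"
done

# Collect the same log sources each cycle gathers.
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig   # fails below: nothing on localhost:8443
sudo journalctl -u crio -n 400
sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
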
	I1206 11:27:18.639253  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:18.650918  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:18.650990  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:18.754748  675284 cri.go:89] found id: ""
	I1206 11:27:18.754771  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.754779  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:18.754786  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:18.754842  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:18.787673  675284 cri.go:89] found id: ""
	I1206 11:27:18.787705  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.787714  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:18.787720  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:18.787779  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:18.815951  675284 cri.go:89] found id: ""
	I1206 11:27:18.815978  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.815987  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:18.815994  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:18.816051  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:18.850851  675284 cri.go:89] found id: ""
	I1206 11:27:18.850877  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.850886  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:18.850892  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:18.850950  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:18.886316  675284 cri.go:89] found id: ""
	I1206 11:27:18.886342  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.886351  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:18.886357  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:18.886420  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:18.926614  675284 cri.go:89] found id: ""
	I1206 11:27:18.926641  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.926650  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:18.926658  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:18.926717  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:18.958898  675284 cri.go:89] found id: ""
	I1206 11:27:18.958923  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.958932  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:18.958939  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:18.959000  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:18.995014  675284 cri.go:89] found id: ""
	I1206 11:27:18.995039  675284 logs.go:282] 0 containers: []
	W1206 11:27:18.995048  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:18.995057  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:18.995072  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:19.082085  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:19.082126  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:19.105311  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:19.105343  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:19.261882  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:19.261904  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:19.261919  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:19.320326  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:19.320366  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:21.864204  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:21.874628  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:21.874707  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:21.905720  675284 cri.go:89] found id: ""
	I1206 11:27:21.905749  675284 logs.go:282] 0 containers: []
	W1206 11:27:21.905758  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:21.905811  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:21.905882  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:21.932482  675284 cri.go:89] found id: ""
	I1206 11:27:21.932510  675284 logs.go:282] 0 containers: []
	W1206 11:27:21.932519  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:21.932526  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:21.932588  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:21.958734  675284 cri.go:89] found id: ""
	I1206 11:27:21.958759  675284 logs.go:282] 0 containers: []
	W1206 11:27:21.958768  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:21.958774  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:21.958836  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:21.985769  675284 cri.go:89] found id: ""
	I1206 11:27:21.985797  675284 logs.go:282] 0 containers: []
	W1206 11:27:21.985813  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:21.985819  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:21.985883  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:22.014362  675284 cri.go:89] found id: ""
	I1206 11:27:22.014400  675284 logs.go:282] 0 containers: []
	W1206 11:27:22.014410  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:22.014418  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:22.014487  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:22.041609  675284 cri.go:89] found id: ""
	I1206 11:27:22.041633  675284 logs.go:282] 0 containers: []
	W1206 11:27:22.041641  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:22.041647  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:22.041707  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:22.072530  675284 cri.go:89] found id: ""
	I1206 11:27:22.072555  675284 logs.go:282] 0 containers: []
	W1206 11:27:22.072565  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:22.072573  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:22.072637  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:22.106946  675284 cri.go:89] found id: ""
	I1206 11:27:22.106968  675284 logs.go:282] 0 containers: []
	W1206 11:27:22.106976  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:22.106985  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:22.106996  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:22.226832  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:22.226881  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:22.259529  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:22.259603  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:22.364274  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:22.364294  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:22.364310  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:22.396847  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:22.396886  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:24.931196  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:24.944737  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:24.944811  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:24.977656  675284 cri.go:89] found id: ""
	I1206 11:27:24.977678  675284 logs.go:282] 0 containers: []
	W1206 11:27:24.977686  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:24.977692  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:24.977750  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:25.010109  675284 cri.go:89] found id: ""
	I1206 11:27:25.010132  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.010141  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:25.010149  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:25.010219  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:25.049864  675284 cri.go:89] found id: ""
	I1206 11:27:25.049886  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.049894  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:25.049900  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:25.049958  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:25.081461  675284 cri.go:89] found id: ""
	I1206 11:27:25.081483  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.081491  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:25.081497  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:25.081560  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:25.116494  675284 cri.go:89] found id: ""
	I1206 11:27:25.116516  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.116525  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:25.116532  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:25.116597  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:25.157718  675284 cri.go:89] found id: ""
	I1206 11:27:25.157740  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.157749  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:25.157757  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:25.157831  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:25.206531  675284 cri.go:89] found id: ""
	I1206 11:27:25.206553  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.206561  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:25.206568  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:25.206626  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:25.275665  675284 cri.go:89] found id: ""
	I1206 11:27:25.275686  675284 logs.go:282] 0 containers: []
	W1206 11:27:25.275697  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:25.275706  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:25.275719  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:25.292756  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:25.292790  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:25.385290  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:25.385307  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:25.385320  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:25.423628  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:25.423660  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:25.462459  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:25.462483  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:28.039246  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:28.050114  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:28.050186  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:28.094686  675284 cri.go:89] found id: ""
	I1206 11:27:28.094709  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.094717  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:28.094724  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:28.094783  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:28.147893  675284 cri.go:89] found id: ""
	I1206 11:27:28.147929  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.147939  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:28.147946  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:28.148004  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:28.180489  675284 cri.go:89] found id: ""
	I1206 11:27:28.180512  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.180521  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:28.180527  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:28.180587  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:28.268927  675284 cri.go:89] found id: ""
	I1206 11:27:28.268950  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.268959  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:28.268966  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:28.269023  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:28.308631  675284 cri.go:89] found id: ""
	I1206 11:27:28.308653  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.308661  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:28.308667  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:28.308725  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:28.337884  675284 cri.go:89] found id: ""
	I1206 11:27:28.337906  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.337915  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:28.337922  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:28.337981  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:28.367782  675284 cri.go:89] found id: ""
	I1206 11:27:28.367858  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.367881  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:28.367900  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:28.367989  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:28.400136  675284 cri.go:89] found id: ""
	I1206 11:27:28.400204  675284 logs.go:282] 0 containers: []
	W1206 11:27:28.400229  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:28.400249  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:28.400285  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:28.423280  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:28.423352  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:28.516519  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:28.516581  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:28.516608  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:28.549242  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:28.549287  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:28.586987  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:28.587012  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:31.163269  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:31.175854  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:31.175923  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:31.216860  675284 cri.go:89] found id: ""
	I1206 11:27:31.216886  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.216894  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:31.216901  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:31.216958  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:31.253710  675284 cri.go:89] found id: ""
	I1206 11:27:31.253737  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.253746  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:31.253753  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:31.253811  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:31.294982  675284 cri.go:89] found id: ""
	I1206 11:27:31.295010  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.295019  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:31.295025  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:31.295081  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:31.332917  675284 cri.go:89] found id: ""
	I1206 11:27:31.332938  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.332946  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:31.332953  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:31.333010  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:31.381647  675284 cri.go:89] found id: ""
	I1206 11:27:31.381674  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.381683  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:31.381690  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:31.381751  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:31.408322  675284 cri.go:89] found id: ""
	I1206 11:27:31.408349  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.408361  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:31.408368  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:31.408427  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:31.447054  675284 cri.go:89] found id: ""
	I1206 11:27:31.447099  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.447108  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:31.447115  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:31.447236  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:31.481274  675284 cri.go:89] found id: ""
	I1206 11:27:31.481299  675284 logs.go:282] 0 containers: []
	W1206 11:27:31.481308  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:31.481317  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:31.481328  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:31.556351  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:31.556434  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:31.573731  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:31.573761  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:31.672514  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:31.672581  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:31.672608  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:31.706644  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:31.706682  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:34.236947  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:34.251689  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:34.251768  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:34.292015  675284 cri.go:89] found id: ""
	I1206 11:27:34.292039  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.292047  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:34.292054  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:34.292111  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:34.337996  675284 cri.go:89] found id: ""
	I1206 11:27:34.338020  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.338030  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:34.338036  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:34.338095  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:34.369126  675284 cri.go:89] found id: ""
	I1206 11:27:34.369150  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.369159  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:34.369165  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:34.369225  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:34.404662  675284 cri.go:89] found id: ""
	I1206 11:27:34.404685  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.404700  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:34.404707  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:34.404769  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:34.444255  675284 cri.go:89] found id: ""
	I1206 11:27:34.444277  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.444287  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:34.444293  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:34.444352  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:34.481410  675284 cri.go:89] found id: ""
	I1206 11:27:34.481431  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.481439  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:34.481446  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:34.481508  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:34.513392  675284 cri.go:89] found id: ""
	I1206 11:27:34.513415  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.513423  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:34.513430  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:34.513489  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:34.545808  675284 cri.go:89] found id: ""
	I1206 11:27:34.545837  675284 logs.go:282] 0 containers: []
	W1206 11:27:34.545846  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:34.545855  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:34.545872  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:34.641287  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:34.641305  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:34.641317  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:34.677688  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:34.677725  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:34.725613  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:34.725643  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:34.806561  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:34.806604  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:37.349933  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:37.361420  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:37.361485  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:37.397460  675284 cri.go:89] found id: ""
	I1206 11:27:37.397483  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.397492  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:37.397498  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:37.397577  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:37.429568  675284 cri.go:89] found id: ""
	I1206 11:27:37.429594  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.429602  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:37.429608  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:37.429670  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:37.463827  675284 cri.go:89] found id: ""
	I1206 11:27:37.463903  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.463927  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:37.463944  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:37.464026  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:37.500544  675284 cri.go:89] found id: ""
	I1206 11:27:37.500565  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.500574  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:37.500580  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:37.500637  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:37.528199  675284 cri.go:89] found id: ""
	I1206 11:27:37.528220  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.528228  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:37.528234  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:37.528291  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:37.557527  675284 cri.go:89] found id: ""
	I1206 11:27:37.557550  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.557558  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:37.557564  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:37.557618  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:37.601999  675284 cri.go:89] found id: ""
	I1206 11:27:37.602027  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.602043  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:37.602049  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:37.602113  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:37.642590  675284 cri.go:89] found id: ""
	I1206 11:27:37.642617  675284 logs.go:282] 0 containers: []
	W1206 11:27:37.642626  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:37.642635  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:37.642646  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:37.724613  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:37.724663  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:37.743656  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:37.743685  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:37.846192  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:37.846216  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:37.846229  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:37.878794  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:37.878826  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:40.411749  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:40.422901  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:40.422996  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:40.472749  675284 cri.go:89] found id: ""
	I1206 11:27:40.472822  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.472844  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:40.472864  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:40.472947  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:40.505432  675284 cri.go:89] found id: ""
	I1206 11:27:40.505454  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.505463  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:40.505469  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:40.505533  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:40.534686  675284 cri.go:89] found id: ""
	I1206 11:27:40.534707  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.534716  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:40.534722  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:40.534779  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:40.568510  675284 cri.go:89] found id: ""
	I1206 11:27:40.568532  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.568540  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:40.568546  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:40.568608  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:40.603321  675284 cri.go:89] found id: ""
	I1206 11:27:40.603350  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.603359  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:40.603366  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:40.603426  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:40.641602  675284 cri.go:89] found id: ""
	I1206 11:27:40.641634  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.641642  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:40.641649  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:40.641708  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:40.674064  675284 cri.go:89] found id: ""
	I1206 11:27:40.674101  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.674110  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:40.674116  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:40.674184  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:40.708151  675284 cri.go:89] found id: ""
	I1206 11:27:40.708186  675284 logs.go:282] 0 containers: []
	W1206 11:27:40.708197  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:40.708206  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:40.708217  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:40.785496  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:40.785536  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:40.803321  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:40.803352  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:40.885309  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:40.885334  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:40.885346  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:40.920788  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:40.920824  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:43.500348  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:43.516830  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:43.517013  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:43.561964  675284 cri.go:89] found id: ""
	I1206 11:27:43.561989  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.562004  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:43.562012  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:43.562084  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:43.610589  675284 cri.go:89] found id: ""
	I1206 11:27:43.610613  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.610622  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:43.610628  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:43.610703  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:43.650274  675284 cri.go:89] found id: ""
	I1206 11:27:43.650299  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.650308  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:43.650315  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:43.650412  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:43.697758  675284 cri.go:89] found id: ""
	I1206 11:27:43.697781  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.697793  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:43.697800  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:43.697864  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:43.737489  675284 cri.go:89] found id: ""
	I1206 11:27:43.737570  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.737594  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:43.737631  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:43.737727  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:43.767539  675284 cri.go:89] found id: ""
	I1206 11:27:43.767613  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.767624  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:43.767632  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:43.767725  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:43.810735  675284 cri.go:89] found id: ""
	I1206 11:27:43.810814  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.810837  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:43.810856  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:43.810969  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:43.856353  675284 cri.go:89] found id: ""
	I1206 11:27:43.856387  675284 logs.go:282] 0 containers: []
	W1206 11:27:43.856405  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:43.856415  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:43.856436  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:43.881289  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:43.881413  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:44.062551  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:44.062574  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:44.062600  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:44.131677  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:44.131777  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:44.177591  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:44.177685  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:46.760166  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:46.770734  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:46.770853  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:46.800415  675284 cri.go:89] found id: ""
	I1206 11:27:46.800442  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.800450  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:46.800456  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:46.800514  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:46.828359  675284 cri.go:89] found id: ""
	I1206 11:27:46.828434  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.828457  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:46.828476  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:46.828560  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:46.855258  675284 cri.go:89] found id: ""
	I1206 11:27:46.855331  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.855355  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:46.855373  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:46.855455  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:46.882170  675284 cri.go:89] found id: ""
	I1206 11:27:46.882242  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.882264  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:46.882284  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:46.882368  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:46.908793  675284 cri.go:89] found id: ""
	I1206 11:27:46.908871  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.908893  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:46.908911  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:46.908993  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:46.945341  675284 cri.go:89] found id: ""
	I1206 11:27:46.945415  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.945438  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:46.945705  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:46.945811  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:46.979614  675284 cri.go:89] found id: ""
	I1206 11:27:46.979691  675284 logs.go:282] 0 containers: []
	W1206 11:27:46.979727  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:46.979753  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:46.979843  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:47.009379  675284 cri.go:89] found id: ""
	I1206 11:27:47.009467  675284 logs.go:282] 0 containers: []
	W1206 11:27:47.009489  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:47.009513  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:47.009552  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:27:47.046751  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:47.046830  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:47.136379  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:47.136467  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:47.153336  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:47.153363  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:47.302626  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:47.302688  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:47.302728  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
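
Every describe-nodes attempt in these cycles dies on the same symptom: nothing is listening on localhost:8443, i.e. the apiserver container never came up (every crictl listing returns an empty ID). A quick manual probe of that symptom, assuming the standard apiserver health endpoints; these commands are not part of this run:

# Confirm nothing is bound to the apiserver port, then try the health endpoints.
sudo ss -ltnp | grep ':8443' || echo "no listener on 8443"
curl -sk https://localhost:8443/livez  && echo   # connection refused while the apiserver is down
curl -sk https://localhost:8443/readyz && echo
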
	I1206 11:27:49.835902  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:27:49.853873  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:27:49.853945  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:27:49.903659  675284 cri.go:89] found id: ""
	I1206 11:27:49.903682  675284 logs.go:282] 0 containers: []
	W1206 11:27:49.903690  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:27:49.903696  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:27:49.903766  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:27:49.972362  675284 cri.go:89] found id: ""
	I1206 11:27:49.972386  675284 logs.go:282] 0 containers: []
	W1206 11:27:49.972398  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:27:49.972404  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:27:49.972469  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:27:50.029741  675284 cri.go:89] found id: ""
	I1206 11:27:50.029768  675284 logs.go:282] 0 containers: []
	W1206 11:27:50.029776  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:27:50.029784  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:27:50.029861  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:27:50.090111  675284 cri.go:89] found id: ""
	I1206 11:27:50.090137  675284 logs.go:282] 0 containers: []
	W1206 11:27:50.090146  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:27:50.090153  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:27:50.090222  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:27:50.139941  675284 cri.go:89] found id: ""
	I1206 11:27:50.139985  675284 logs.go:282] 0 containers: []
	W1206 11:27:50.139994  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:27:50.140001  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:27:50.140099  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:27:50.200360  675284 cri.go:89] found id: ""
	I1206 11:27:50.200383  675284 logs.go:282] 0 containers: []
	W1206 11:27:50.200391  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:27:50.200398  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:27:50.200460  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:27:50.261937  675284 cri.go:89] found id: ""
	I1206 11:27:50.261958  675284 logs.go:282] 0 containers: []
	W1206 11:27:50.261967  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:27:50.261973  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:27:50.262031  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:27:50.337692  675284 cri.go:89] found id: ""
	I1206 11:27:50.337716  675284 logs.go:282] 0 containers: []
	W1206 11:27:50.337730  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:27:50.337740  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:27:50.337751  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:27:50.451153  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:27:50.451233  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:27:50.478492  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:27:50.478572  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:27:50.583429  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:27:50.583499  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:27:50.583526  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:27:50.636920  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:27:50.640642  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
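	[editor's note: the block above (pgrep for the apiserver, then one `crictl ps -a --quiet --name=<component>` call per control-plane component, then kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering) is minikube's apiserver wait loop, repeated until timeout. In shell terms it is roughly the loop below - a sketch only; the real implementation is the Go code referenced in these lines (cri.go, logs.go), and the ~3-second cadence is read off the timestamps:
	
	  while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      sudo crictl ps -a --quiet --name="$c"    # prints nothing here: no component ever started
	    done
	    sleep 3                                    # assumed interval, matching the log timestamps
	  done
	]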
	[... the same probe cycle repeats roughly every three seconds from 11:27:53 through 11:28:21 (ten further iterations, identical apart from timestamps): pgrep finds no kube-apiserver process, every `sudo crictl ps -a --quiet --name=<component>` call returns no container IDs for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or storage-provisioner, and each "describe nodes" attempt fails with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I1206 11:28:23.773746  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:23.783968  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:23.784044  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:23.814260  675284 cri.go:89] found id: ""
	I1206 11:28:23.814284  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.814293  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:23.814299  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:23.814359  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:23.841338  675284 cri.go:89] found id: ""
	I1206 11:28:23.841366  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.841376  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:23.841383  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:23.841446  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:23.868192  675284 cri.go:89] found id: ""
	I1206 11:28:23.868219  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.868228  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:23.868235  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:23.868291  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:23.893838  675284 cri.go:89] found id: ""
	I1206 11:28:23.893868  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.893876  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:23.893883  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:23.893947  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:23.920483  675284 cri.go:89] found id: ""
	I1206 11:28:23.920505  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.920513  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:23.920520  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:23.920585  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:23.952152  675284 cri.go:89] found id: ""
	I1206 11:28:23.952176  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.952184  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:23.952191  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:23.952252  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:23.978957  675284 cri.go:89] found id: ""
	I1206 11:28:23.978986  675284 logs.go:282] 0 containers: []
	W1206 11:28:23.978995  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:23.979002  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:23.979059  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:24.008697  675284 cri.go:89] found id: ""
	I1206 11:28:24.008736  675284 logs.go:282] 0 containers: []
	W1206 11:28:24.008745  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:24.008773  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:24.008793  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:24.080503  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:24.080539  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:24.101212  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:24.101241  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:24.169137  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:24.169160  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:24.169174  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:24.200428  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:24.200463  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:26.731465  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:26.741368  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:26.741437  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:26.765463  675284 cri.go:89] found id: ""
	I1206 11:28:26.765491  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.765499  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:26.765505  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:26.765560  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:26.793003  675284 cri.go:89] found id: ""
	I1206 11:28:26.793029  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.793038  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:26.793044  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:26.793105  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:26.817647  675284 cri.go:89] found id: ""
	I1206 11:28:26.817674  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.817683  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:26.817689  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:26.817752  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:26.843231  675284 cri.go:89] found id: ""
	I1206 11:28:26.843256  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.843265  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:26.843272  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:26.843330  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:26.872800  675284 cri.go:89] found id: ""
	I1206 11:28:26.872826  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.872835  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:26.872842  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:26.872901  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:26.897613  675284 cri.go:89] found id: ""
	I1206 11:28:26.897642  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.897651  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:26.897664  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:26.897720  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:26.923038  675284 cri.go:89] found id: ""
	I1206 11:28:26.923066  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.923076  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:26.923082  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:26.923195  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:26.949586  675284 cri.go:89] found id: ""
	I1206 11:28:26.949612  675284 logs.go:282] 0 containers: []
	W1206 11:28:26.949621  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:26.949630  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:26.949672  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:27.012553  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:27.012571  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:27.012586  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:27.053558  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:27.053617  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:27.083319  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:27.083353  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:27.154517  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:27.154556  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:29.671291  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:29.690100  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:29.690176  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:29.731029  675284 cri.go:89] found id: ""
	I1206 11:28:29.731058  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.731069  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:29.731076  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:29.731169  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:29.797751  675284 cri.go:89] found id: ""
	I1206 11:28:29.797781  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.797790  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:29.797797  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:29.797855  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:29.836990  675284 cri.go:89] found id: ""
	I1206 11:28:29.837020  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.837029  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:29.837036  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:29.837093  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:29.867305  675284 cri.go:89] found id: ""
	I1206 11:28:29.867332  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.867342  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:29.867348  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:29.867412  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:29.895837  675284 cri.go:89] found id: ""
	I1206 11:28:29.895862  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.895871  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:29.895877  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:29.895935  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:29.929755  675284 cri.go:89] found id: ""
	I1206 11:28:29.929783  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.929792  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:29.929798  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:29.929858  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:29.969653  675284 cri.go:89] found id: ""
	I1206 11:28:29.969681  675284 logs.go:282] 0 containers: []
	W1206 11:28:29.969690  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:29.969697  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:29.969753  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:30.007846  675284 cri.go:89] found id: ""
	I1206 11:28:30.007888  675284 logs.go:282] 0 containers: []
	W1206 11:28:30.007899  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:30.007909  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:30.007921  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:30.051110  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:30.051286  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:30.087972  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:30.088012  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:30.170013  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:30.170046  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:30.188973  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:30.189003  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:30.264313  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
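Every describe-nodes attempt fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with crictl reporting zero control-plane containers above. Debugging this by hand, a quick check on the node might look like the following sketch, assuming the apiserver would serve on the standard minikube port 8443:

    # "connection refused" here means nothing is listening, matching the log
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"
    # confirm that no apiserver container was ever created
    sudo crictl ps -a --name=kube-apiserver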
	I1206 11:28:32.764576  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:32.776142  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:32.776216  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:32.815234  675284 cri.go:89] found id: ""
	I1206 11:28:32.815263  675284 logs.go:282] 0 containers: []
	W1206 11:28:32.815272  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:32.815279  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:32.815337  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:32.853202  675284 cri.go:89] found id: ""
	I1206 11:28:32.853224  675284 logs.go:282] 0 containers: []
	W1206 11:28:32.853233  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:32.853239  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:32.853299  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:32.896595  675284 cri.go:89] found id: ""
	I1206 11:28:32.896622  675284 logs.go:282] 0 containers: []
	W1206 11:28:32.896631  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:32.896637  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:32.896695  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:32.934973  675284 cri.go:89] found id: ""
	I1206 11:28:32.934995  675284 logs.go:282] 0 containers: []
	W1206 11:28:32.935004  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:32.935010  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:32.935072  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:32.969066  675284 cri.go:89] found id: ""
	I1206 11:28:32.969089  675284 logs.go:282] 0 containers: []
	W1206 11:28:32.969097  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:32.969103  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:32.969161  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:32.997851  675284 cri.go:89] found id: ""
	I1206 11:28:32.997872  675284 logs.go:282] 0 containers: []
	W1206 11:28:32.997880  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:32.997887  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:32.997959  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:33.035801  675284 cri.go:89] found id: ""
	I1206 11:28:33.035824  675284 logs.go:282] 0 containers: []
	W1206 11:28:33.035833  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:33.035839  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:33.035900  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:33.066281  675284 cri.go:89] found id: ""
	I1206 11:28:33.066306  675284 logs.go:282] 0 containers: []
	W1206 11:28:33.066322  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:33.066334  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:33.066345  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:33.147448  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:33.147526  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:33.165853  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:33.165877  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:33.254257  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:33.254275  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:33.254287  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:33.290588  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:33.290624  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:35.835069  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:35.847106  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:35.847204  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:35.880785  675284 cri.go:89] found id: ""
	I1206 11:28:35.880810  675284 logs.go:282] 0 containers: []
	W1206 11:28:35.880818  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:35.880825  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:35.880888  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:35.916072  675284 cri.go:89] found id: ""
	I1206 11:28:35.916100  675284 logs.go:282] 0 containers: []
	W1206 11:28:35.916108  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:35.916114  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:35.916178  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:35.956671  675284 cri.go:89] found id: ""
	I1206 11:28:35.956694  675284 logs.go:282] 0 containers: []
	W1206 11:28:35.956703  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:35.956711  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:35.956808  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:35.997591  675284 cri.go:89] found id: ""
	I1206 11:28:35.997615  675284 logs.go:282] 0 containers: []
	W1206 11:28:35.997623  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:35.997630  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:35.997694  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:36.037046  675284 cri.go:89] found id: ""
	I1206 11:28:36.037068  675284 logs.go:282] 0 containers: []
	W1206 11:28:36.037076  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:36.037082  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:36.037141  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:36.071915  675284 cri.go:89] found id: ""
	I1206 11:28:36.071937  675284 logs.go:282] 0 containers: []
	W1206 11:28:36.071946  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:36.071952  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:36.072012  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:36.105624  675284 cri.go:89] found id: ""
	I1206 11:28:36.105646  675284 logs.go:282] 0 containers: []
	W1206 11:28:36.105654  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:36.105660  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:36.105718  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:36.138972  675284 cri.go:89] found id: ""
	I1206 11:28:36.138993  675284 logs.go:282] 0 containers: []
	W1206 11:28:36.139001  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:36.139010  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:36.139022  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:36.176620  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:36.176654  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:36.216918  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:36.216993  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:36.287398  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:36.287487  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:36.306720  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:36.306828  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:36.394369  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:38.895617  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:38.919897  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:38.919983  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:38.975888  675284 cri.go:89] found id: ""
	I1206 11:28:38.975915  675284 logs.go:282] 0 containers: []
	W1206 11:28:38.975926  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:38.975932  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:38.975993  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:39.021905  675284 cri.go:89] found id: ""
	I1206 11:28:39.021939  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.021949  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:39.021955  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:39.022015  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:39.057158  675284 cri.go:89] found id: ""
	I1206 11:28:39.057178  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.057186  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:39.057192  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:39.057246  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:39.119390  675284 cri.go:89] found id: ""
	I1206 11:28:39.119410  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.119418  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:39.119424  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:39.119480  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:39.157583  675284 cri.go:89] found id: ""
	I1206 11:28:39.157606  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.157615  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:39.157621  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:39.157680  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:39.205313  675284 cri.go:89] found id: ""
	I1206 11:28:39.205340  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.205348  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:39.205355  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:39.205411  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:39.253364  675284 cri.go:89] found id: ""
	I1206 11:28:39.253391  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.253400  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:39.253406  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:39.253466  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:39.301730  675284 cri.go:89] found id: ""
	I1206 11:28:39.301757  675284 logs.go:282] 0 containers: []
	W1206 11:28:39.301766  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:39.301775  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:39.301785  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:39.400347  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:39.400384  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:39.423414  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:39.423445  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:39.570450  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:39.570474  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:39.570489  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:39.618062  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:39.618098  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:42.174445  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:42.191430  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:42.191506  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:42.231650  675284 cri.go:89] found id: ""
	I1206 11:28:42.231684  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.231696  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:42.231711  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:42.231794  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:42.267758  675284 cri.go:89] found id: ""
	I1206 11:28:42.267785  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.267794  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:42.267801  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:42.267868  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:42.298564  675284 cri.go:89] found id: ""
	I1206 11:28:42.298590  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.298599  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:42.298606  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:42.298672  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:42.328125  675284 cri.go:89] found id: ""
	I1206 11:28:42.328150  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.328159  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:42.328166  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:42.328244  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:42.359399  675284 cri.go:89] found id: ""
	I1206 11:28:42.359437  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.359446  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:42.359453  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:42.359514  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:42.387096  675284 cri.go:89] found id: ""
	I1206 11:28:42.387195  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.387222  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:42.387238  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:42.387307  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:42.441187  675284 cri.go:89] found id: ""
	I1206 11:28:42.441261  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.441284  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:42.441304  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:42.441385  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:42.475535  675284 cri.go:89] found id: ""
	I1206 11:28:42.475608  675284 logs.go:282] 0 containers: []
	W1206 11:28:42.475631  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:42.475654  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:42.475687  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:42.560097  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:42.560184  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:42.579288  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:42.579368  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:42.727503  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:42.727574  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:42.727603  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:42.779051  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:42.779091  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:45.316733  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:45.340611  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:45.340686  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:45.419323  675284 cri.go:89] found id: ""
	I1206 11:28:45.419345  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.419353  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:45.419359  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:45.419413  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:45.520598  675284 cri.go:89] found id: ""
	I1206 11:28:45.520621  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.520630  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:45.520636  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:45.520694  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:45.574700  675284 cri.go:89] found id: ""
	I1206 11:28:45.574721  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.574728  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:45.574734  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:45.574784  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:45.633342  675284 cri.go:89] found id: ""
	I1206 11:28:45.633373  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.633382  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:45.633389  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:45.633453  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:45.697708  675284 cri.go:89] found id: ""
	I1206 11:28:45.697736  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.697745  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:45.697752  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:45.697812  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:45.738494  675284 cri.go:89] found id: ""
	I1206 11:28:45.738521  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.738531  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:45.738543  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:45.738619  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:45.813457  675284 cri.go:89] found id: ""
	I1206 11:28:45.813485  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.813493  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:45.813500  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:45.813560  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:45.866304  675284 cri.go:89] found id: ""
	I1206 11:28:45.866330  675284 logs.go:282] 0 containers: []
	W1206 11:28:45.866339  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:45.866347  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:45.866368  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:46.010132  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:46.010152  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:46.010166  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:46.061060  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:46.061100  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:46.107571  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:46.107603  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:46.206283  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:46.206322  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:48.744981  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:48.762020  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:48.762098  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:48.803616  675284 cri.go:89] found id: ""
	I1206 11:28:48.803635  675284 logs.go:282] 0 containers: []
	W1206 11:28:48.803643  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:48.803649  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:48.803710  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:48.833223  675284 cri.go:89] found id: ""
	I1206 11:28:48.833323  675284 logs.go:282] 0 containers: []
	W1206 11:28:48.833340  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:48.833351  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:48.833425  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:48.870207  675284 cri.go:89] found id: ""
	I1206 11:28:48.870229  675284 logs.go:282] 0 containers: []
	W1206 11:28:48.870238  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:48.870244  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:48.870302  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:48.907740  675284 cri.go:89] found id: ""
	I1206 11:28:48.907789  675284 logs.go:282] 0 containers: []
	W1206 11:28:48.907798  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:48.907804  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:48.907870  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:48.946559  675284 cri.go:89] found id: ""
	I1206 11:28:48.946587  675284 logs.go:282] 0 containers: []
	W1206 11:28:48.946596  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:48.946606  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:48.946664  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:48.995249  675284 cri.go:89] found id: ""
	I1206 11:28:48.995276  675284 logs.go:282] 0 containers: []
	W1206 11:28:48.995284  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:48.995291  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:48.995350  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:49.026442  675284 cri.go:89] found id: ""
	I1206 11:28:49.026469  675284 logs.go:282] 0 containers: []
	W1206 11:28:49.026479  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:49.026486  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:49.026546  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:49.059464  675284 cri.go:89] found id: ""
	I1206 11:28:49.059487  675284 logs.go:282] 0 containers: []
	W1206 11:28:49.059495  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:49.059504  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:49.059514  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:49.143648  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:49.143671  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:49.143684  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:49.177306  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:49.177333  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:49.214817  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:49.214841  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:49.298621  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:49.298673  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:51.831589  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:51.842555  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:51.842630  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:51.873401  675284 cri.go:89] found id: ""
	I1206 11:28:51.873428  675284 logs.go:282] 0 containers: []
	W1206 11:28:51.873437  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:51.873443  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:51.873501  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:51.905438  675284 cri.go:89] found id: ""
	I1206 11:28:51.905464  675284 logs.go:282] 0 containers: []
	W1206 11:28:51.905473  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:51.905479  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:51.905538  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:51.939716  675284 cri.go:89] found id: ""
	I1206 11:28:51.939742  675284 logs.go:282] 0 containers: []
	W1206 11:28:51.939750  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:51.939763  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:51.939824  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:51.968064  675284 cri.go:89] found id: ""
	I1206 11:28:51.968091  675284 logs.go:282] 0 containers: []
	W1206 11:28:51.968100  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:51.968106  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:51.968178  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:52.001336  675284 cri.go:89] found id: ""
	I1206 11:28:52.001366  675284 logs.go:282] 0 containers: []
	W1206 11:28:52.001375  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:52.001393  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:52.001462  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:52.063566  675284 cri.go:89] found id: ""
	I1206 11:28:52.063594  675284 logs.go:282] 0 containers: []
	W1206 11:28:52.063604  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:52.063611  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:52.063822  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:52.115146  675284 cri.go:89] found id: ""
	I1206 11:28:52.115220  675284 logs.go:282] 0 containers: []
	W1206 11:28:52.115243  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:52.115263  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:52.115373  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:52.144547  675284 cri.go:89] found id: ""
	I1206 11:28:52.144567  675284 logs.go:282] 0 containers: []
	W1206 11:28:52.144576  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:52.144584  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:52.144595  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:52.219276  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:52.219368  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:52.240339  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:52.240415  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:52.348785  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:52.348849  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:52.348876  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:52.382748  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:52.382827  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:54.928397  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:54.941878  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:28:54.941964  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:28:54.991520  675284 cri.go:89] found id: ""
	I1206 11:28:54.991542  675284 logs.go:282] 0 containers: []
	W1206 11:28:54.991550  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:28:54.991557  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:28:54.991643  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:28:55.050097  675284 cri.go:89] found id: ""
	I1206 11:28:55.050124  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.050132  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:28:55.050139  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:28:55.050199  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:28:55.090133  675284 cri.go:89] found id: ""
	I1206 11:28:55.090186  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.090196  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:28:55.090203  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:28:55.090267  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:28:55.181789  675284 cri.go:89] found id: ""
	I1206 11:28:55.181837  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.181848  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:28:55.181855  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:28:55.181921  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:28:55.248662  675284 cri.go:89] found id: ""
	I1206 11:28:55.248685  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.248694  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:28:55.248700  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:28:55.248757  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:28:55.301470  675284 cri.go:89] found id: ""
	I1206 11:28:55.301491  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.301499  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:28:55.301506  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:28:55.301564  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:28:55.349700  675284 cri.go:89] found id: ""
	I1206 11:28:55.349723  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.349731  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:28:55.349737  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:28:55.349796  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:28:55.415546  675284 cri.go:89] found id: ""
	I1206 11:28:55.415634  675284 logs.go:282] 0 containers: []
	W1206 11:28:55.415647  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:28:55.415656  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:28:55.415668  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:28:55.492723  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:28:55.492749  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:28:55.746752  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:28:55.746771  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:28:55.746784  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:28:55.808463  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:28:55.808539  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:28:55.894881  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:28:55.894908  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:28:58.490804  675284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:28:58.500697  675284 kubeadm.go:602] duration metric: took 4m5.019495361s to restartPrimaryControlPlane
	W1206 11:28:58.500807  675284 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
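After roughly four minutes of this loop without the apiserver appearing, minikube gives up on restarting the existing control plane and falls back to a full re-bootstrap: kubeadm reset wipes the control-plane state, stale kubeconfigs are cleared, and kubeadm init runs against the freshly generated kubeadm.yaml, as the following lines show.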
	I1206 11:28:58.500875  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 11:28:58.949291  675284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:28:58.965298  675284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:28:58.977082  675284 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:28:58.977173  675284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:28:58.990842  675284 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:28:58.990890  675284 kubeadm.go:158] found existing configuration files:
	
	I1206 11:28:58.991030  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:28:59.005284  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:28:59.005384  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:28:59.020893  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:28:59.029629  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:28:59.029750  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:28:59.038261  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:28:59.046699  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:28:59.046767  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:28:59.059744  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:28:59.072566  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:28:59.072636  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:28:59.086126  675284 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:28:59.133206  675284 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:28:59.133466  675284 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:28:59.251862  675284 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:28:59.251938  675284 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:28:59.251982  675284 kubeadm.go:319] OS: Linux
	I1206 11:28:59.252034  675284 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:28:59.252088  675284 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:28:59.252138  675284 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:28:59.252190  675284 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:28:59.252247  675284 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:28:59.252298  675284 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:28:59.252349  675284 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:28:59.252399  675284 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:28:59.252449  675284 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:28:59.339206  675284 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:28:59.339322  675284 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:28:59.339440  675284 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:28:59.376668  675284 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:28:59.381694  675284 out.go:252]   - Generating certificates and keys ...
	I1206 11:28:59.381790  675284 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:28:59.387114  675284 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:28:59.387215  675284 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:28:59.387307  675284 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:28:59.387388  675284 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:28:59.387445  675284 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:28:59.387516  675284 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:28:59.387578  675284 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:28:59.387652  675284 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:28:59.388082  675284 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:28:59.389222  675284 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:28:59.389519  675284 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:28:59.570490  675284 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:28:59.798483  675284 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:29:00.007972  675284 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:29:00.279765  675284 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:29:00.580214  675284 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:29:00.581324  675284 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:29:00.584261  675284 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:29:00.590464  675284 out.go:252]   - Booting up control plane ...
	I1206 11:29:00.590581  675284 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:29:00.590666  675284 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:29:00.591048  675284 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:29:00.607558  675284 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:29:00.607681  675284 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:29:00.615620  675284 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:29:00.615725  675284 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:29:00.615769  675284 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:29:00.791241  675284 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:29:00.791381  675284 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:33:00.791052  675284 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000242847s
	I1206 11:33:00.791093  675284 kubeadm.go:319] 
	I1206 11:33:00.791167  675284 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:33:00.791205  675284 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:33:00.791318  675284 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:33:00.791332  675284 kubeadm.go:319] 
	I1206 11:33:00.791437  675284 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:33:00.791472  675284 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:33:00.791506  675284 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:33:00.791514  675284 kubeadm.go:319] 
	I1206 11:33:00.795055  675284 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:33:00.795577  675284 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:33:00.795698  675284 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:33:00.795936  675284 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:33:00.795946  675284 kubeadm.go:319] 
	I1206 11:33:00.796014  675284 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 11:33:00.796125  675284 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242847s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
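	The wait-control-plane failure above reduces to the kubelet health endpoint at 127.0.0.1:10248 never answering. A minimal way to reproduce the checks kubeadm names (all three commands are quoted from the output above; they would be run on the node itself, e.g. via minikube ssh, which is an assumption about how this cluster is reached):
	# Probe kubeadm polls during [kubelet-check], as quoted in the error above:
	curl -sSL http://127.0.0.1:10248/healthz
	# Troubleshooting commands kubeadm suggests for systemd-powered systems:
	systemctl status kubelet
	journalctl -xeu kubelet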
	
	I1206 11:33:00.796215  675284 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 11:33:01.237677  675284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:33:01.259770  675284 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:33:01.259846  675284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:33:01.271684  675284 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:33:01.271708  675284 kubeadm.go:158] found existing configuration files:
	
	I1206 11:33:01.271760  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:33:01.282762  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:33:01.282865  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:33:01.292352  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:33:01.304060  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:33:01.304120  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:33:01.312524  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:33:01.322149  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:33:01.322216  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:33:01.330808  675284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:33:01.340855  675284 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:33:01.340972  675284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:33:01.349829  675284 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:33:01.402394  675284 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:33:01.403059  675284 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:33:01.520019  675284 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:33:01.520090  675284 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:33:01.520126  675284 kubeadm.go:319] OS: Linux
	I1206 11:33:01.520171  675284 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:33:01.520220  675284 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:33:01.520267  675284 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:33:01.520315  675284 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:33:01.520365  675284 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:33:01.520420  675284 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:33:01.520465  675284 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:33:01.520513  675284 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:33:01.520559  675284 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:33:01.597948  675284 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:33:01.598059  675284 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:33:01.598150  675284 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:33:01.606664  675284 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:33:01.612022  675284 out.go:252]   - Generating certificates and keys ...
	I1206 11:33:01.612130  675284 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:33:01.612195  675284 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:33:01.612271  675284 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:33:01.612331  675284 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:33:01.612400  675284 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:33:01.612453  675284 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:33:01.612517  675284 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:33:01.612577  675284 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:33:01.612651  675284 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:33:01.612725  675284 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:33:01.612769  675284 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:33:01.612826  675284 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:33:02.114699  675284 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:33:02.388389  675284 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:33:02.460828  675284 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:33:02.711777  675284 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:33:02.945328  675284 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:33:02.946090  675284 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:33:02.948871  675284 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:33:02.952216  675284 out.go:252]   - Booting up control plane ...
	I1206 11:33:02.952333  675284 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:33:02.952417  675284 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:33:02.952488  675284 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:33:02.968180  675284 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:33:02.968296  675284 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:33:02.977722  675284 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:33:02.978817  675284 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:33:02.979267  675284 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:33:03.119575  675284 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:33:03.119697  675284 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:37:03.117951  675284 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000114184s
	I1206 11:37:03.117990  675284 kubeadm.go:319] 
	I1206 11:37:03.118054  675284 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:37:03.118090  675284 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:37:03.118278  675284 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:37:03.118335  675284 kubeadm.go:319] 
	I1206 11:37:03.118461  675284 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:37:03.118496  675284 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:37:03.118528  675284 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:37:03.118534  675284 kubeadm.go:319] 
	I1206 11:37:03.121777  675284 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:37:03.122201  675284 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:37:03.122312  675284 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:37:03.122582  675284 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 11:37:03.122592  675284 kubeadm.go:319] 
	I1206 11:37:03.122662  675284 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:37:03.122729  675284 kubeadm.go:403] duration metric: took 12m9.687967117s to StartCluster
	I1206 11:37:03.122772  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:37:03.122849  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:37:03.150554  675284 cri.go:89] found id: ""
	I1206 11:37:03.150577  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.150585  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:37:03.150592  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:37:03.150651  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:37:03.176593  675284 cri.go:89] found id: ""
	I1206 11:37:03.176620  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.176630  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:37:03.176637  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:37:03.176699  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:37:03.206214  675284 cri.go:89] found id: ""
	I1206 11:37:03.206240  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.206248  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:37:03.206255  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:37:03.206313  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:37:03.231742  675284 cri.go:89] found id: ""
	I1206 11:37:03.231768  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.231776  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:37:03.231783  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:37:03.231842  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:37:03.258845  675284 cri.go:89] found id: ""
	I1206 11:37:03.258868  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.258877  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:37:03.258884  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:37:03.258942  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:37:03.285236  675284 cri.go:89] found id: ""
	I1206 11:37:03.285261  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.285269  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:37:03.285276  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:37:03.285339  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:37:03.311760  675284 cri.go:89] found id: ""
	I1206 11:37:03.311783  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.311791  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:37:03.311798  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:37:03.311860  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:37:03.337843  675284 cri.go:89] found id: ""
	I1206 11:37:03.337934  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.337960  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:37:03.337984  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:37:03.338026  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:37:03.411972  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:37:03.412009  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:37:03.428553  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:37:03.428582  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:37:03.520037  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:37:03.520059  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:37:03.520071  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:37:03.553157  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:37:03.553196  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:37:03.581339  675284 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000114184s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:37:03.581438  675284 out.go:285] * 
	W1206 11:37:03.581613  675284 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000114184s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:37:03.581633  675284 out.go:285] * 
	W1206 11:37:03.584726  675284 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
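	The one actionable preflight warning repeated in every attempt is the cgroups v1 deprecation, which states that kubelet v1.35 or newer must set FailCgroupV1 to false to keep running on a cgroup v1 host such as this 5.15.0-1084-aws kernel. A hedged sketch of that opt-in, assuming the kubelet config field name failCgroupV1 and the config path minikube writes above ([kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"); whether this addresses this particular kubelet crash cannot be told from this excerpt, since the warning is the only configuration hint the log surfaces:
	# Hypothetical remediation sketch, not part of the test run above:
	# append the cgroup v1 opt-in to the kubelet config minikube generated,
	# then restart the kubelet so the new setting takes effect.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet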
	I1206 11:37:03.591337  675284 out.go:203] 
	W1206 11:37:03.594308  675284 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000114184s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:37:03.594365  675284 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:37:03.594385  675284 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:37:03.597633  675284 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
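The root cause above is the kubelet never answering its /healthz probe, and the log's own Suggestion line points at a cgroup-driver mismatch. As a minimal sketch of that remediation (profile name, versions, and flags are copied from the failing invocation; the --extra-config value comes verbatim from the Suggestion line):

	# Retry the upgrade with the cgroup-driver hint from the Suggestion line.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-888189 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still refuses connections on 127.0.0.1:10248, inspect it
	# from inside the node, per the kubeadm troubleshooting hint above.
	out/minikube-linux-arm64 -p kubernetes-upgrade-888189 ssh -- sudo journalctl -xeu kubelet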
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-888189 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-888189 version --output=json: exit status 1 (91.667061ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
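Note that kubectl still printed a clientVersion block: with --output=json the client half is produced locally, so exit status 1 reflects only the refused connection to 192.168.76.2:8443. A small sketch to separate the two failure modes (endpoint and port copied from the stderr above):

	# Client-only version never contacts the apiserver and should succeed.
	kubectl --context kubernetes-upgrade-888189 version --client --output=json
	# Probe the apiserver endpoint directly to confirm the refusal is at the network level.
	curl -k --max-time 5 https://192.168.76.2:8443/healthz || echo 'apiserver unreachable'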
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-06 11:37:04.062978132 +0000 UTC m=+5180.860401271
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-888189
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-888189:

-- stdout --
	[
	    {
	        "Id": "8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0",
	        "Created": "2025-12-06T11:24:14.315874715Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 675411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:24:42.399847974Z",
	            "FinishedAt": "2025-12-06T11:24:41.196681111Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0/hosts",
	        "LogPath": "/var/lib/docker/containers/8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0/8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0-json.log",
	        "Name": "/kubernetes-upgrade-888189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-888189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-888189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c61268106a772f759a692c0b62a7d59e6e3ac31e65d43daa3481cc4db6b48f0",
	                "LowerDir": "/var/lib/docker/overlay2/a7d369929d3cc711ebe6380b04db60bd806f297380491cc267d10262db0eeef4-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7d369929d3cc711ebe6380b04db60bd806f297380491cc267d10262db0eeef4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7d369929d3cc711ebe6380b04db60bd806f297380491cc267d10262db0eeef4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7d369929d3cc711ebe6380b04db60bd806f297380491cc267d10262db0eeef4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-888189",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-888189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-888189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-888189",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-888189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b65258ff0969af6365388fc4ff68b2f14818d74414f100eb1397b508de5c3213",
	            "SandboxKey": "/var/run/docker/netns/b65258ff0969",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-888189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:59:30:cd:16:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "caa0015c228c8002d482f85f03b1c649a49e0c2b9b986a28441534d5e19c24b6",
	                    "EndpointID": "64e22f1c161f58a540ad6067acc949800136dc14c31a5cddd4c94fd17023fcff",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-888189",
	                        "8c61268106a7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
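The full JSON dump above can be narrowed to the fields the post-mortem actually consults, using the same Go-template format strings the harness runs elsewhere in these logs; a minimal sketch:

	# Container state, published apiserver port, and node IP from the inspect data.
	docker inspect -f '{{.State.Status}}' kubernetes-upgrade-888189
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-888189
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-888189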
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-888189 -n kubernetes-upgrade-888189
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-888189 -n kubernetes-upgrade-888189: exit status 2 (384.280851ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
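The nonzero exit here encodes component state rather than a command error: the Host field reports Running while the Kubernetes components inside the container are down, which is consistent with the kubelet failure above. A sketch of the fuller view, assuming the status command's standard --output flag:

	# Show every component field, not just {{.Host}}, to see what is actually down.
	out/minikube-linux-arm64 status -p kubernetes-upgrade-888189 --output=json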
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-888189 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p pause-362686 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-362686              │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ delete  │ -p pause-362686                                                                                                                                                                                                           │ pause-362686              │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:22 UTC │
	│ delete  │ -p force-systemd-env-163342                                                                                                                                                                                               │ force-systemd-env-163342  │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:22 UTC │
	│ start   │ -p force-systemd-flag-114030 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-114030 │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:23 UTC │
	│ start   │ -p cert-expiration-378339 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-378339    │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:23 UTC │
	│ ssh     │ force-systemd-flag-114030 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-114030 │ jenkins │ v1.37.0 │ 06 Dec 25 11:23 UTC │ 06 Dec 25 11:23 UTC │
	│ delete  │ -p force-systemd-flag-114030                                                                                                                                                                                              │ force-systemd-flag-114030 │ jenkins │ v1.37.0 │ 06 Dec 25 11:23 UTC │ 06 Dec 25 11:23 UTC │
	│ start   │ -p cert-options-196078 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-196078       │ jenkins │ v1.37.0 │ 06 Dec 25 11:23 UTC │ 06 Dec 25 11:24 UTC │
	│ ssh     │ cert-options-196078 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-196078       │ jenkins │ v1.37.0 │ 06 Dec 25 11:24 UTC │ 06 Dec 25 11:24 UTC │
	│ ssh     │ -p cert-options-196078 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-196078       │ jenkins │ v1.37.0 │ 06 Dec 25 11:24 UTC │ 06 Dec 25 11:24 UTC │
	│ delete  │ -p cert-options-196078                                                                                                                                                                                                    │ cert-options-196078       │ jenkins │ v1.37.0 │ 06 Dec 25 11:24 UTC │ 06 Dec 25 11:24 UTC │
	│ start   │ -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-888189 │ jenkins │ v1.37.0 │ 06 Dec 25 11:24 UTC │ 06 Dec 25 11:24 UTC │
	│ stop    │ -p kubernetes-upgrade-888189                                                                                                                                                                                              │ kubernetes-upgrade-888189 │ jenkins │ v1.37.0 │ 06 Dec 25 11:24 UTC │ 06 Dec 25 11:24 UTC │
	│ start   │ -p kubernetes-upgrade-888189 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                           │ kubernetes-upgrade-888189 │ jenkins │ v1.37.0 │ 06 Dec 25 11:24 UTC │                     │
	│ start   │ -p cert-expiration-378339 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-378339    │ jenkins │ v1.37.0 │ 06 Dec 25 11:26 UTC │ 06 Dec 25 11:26 UTC │
	│ delete  │ -p cert-expiration-378339                                                                                                                                                                                                 │ cert-expiration-378339    │ jenkins │ v1.37.0 │ 06 Dec 25 11:26 UTC │ 06 Dec 25 11:26 UTC │
	│ start   │ -p missing-upgrade-707485 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                         │ missing-upgrade-707485    │ jenkins │ v1.35.0 │ 06 Dec 25 11:26 UTC │ 06 Dec 25 11:27 UTC │
	│ start   │ -p missing-upgrade-707485 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ missing-upgrade-707485    │ jenkins │ v1.37.0 │ 06 Dec 25 11:27 UTC │ 06 Dec 25 11:28 UTC │
	│ delete  │ -p missing-upgrade-707485                                                                                                                                                                                                 │ missing-upgrade-707485    │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │ 06 Dec 25 11:28 UTC │
	│ start   │ -p running-upgrade-684602 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-684602    │ jenkins │ v1.35.0 │ 06 Dec 25 11:28 UTC │ 06 Dec 25 11:29 UTC │
	│ start   │ -p running-upgrade-684602 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-684602    │ jenkins │ v1.37.0 │ 06 Dec 25 11:29 UTC │ 06 Dec 25 11:33 UTC │
	│ delete  │ -p running-upgrade-684602                                                                                                                                                                                                 │ running-upgrade-684602    │ jenkins │ v1.37.0 │ 06 Dec 25 11:33 UTC │ 06 Dec 25 11:33 UTC │
	│ start   │ -p stopped-upgrade-468509 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-468509    │ jenkins │ v1.35.0 │ 06 Dec 25 11:33 UTC │ 06 Dec 25 11:34 UTC │
	│ stop    │ stopped-upgrade-468509 stop                                                                                                                                                                                               │ stopped-upgrade-468509    │ jenkins │ v1.35.0 │ 06 Dec 25 11:34 UTC │ 06 Dec 25 11:34 UTC │
	│ start   │ -p stopped-upgrade-468509 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-468509    │ jenkins │ v1.37.0 │ 06 Dec 25 11:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:34:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:34:25.207497  705804 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:34:25.207699  705804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:34:25.207727  705804 out.go:374] Setting ErrFile to fd 2...
	I1206 11:34:25.207748  705804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:34:25.208045  705804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:34:25.208453  705804 out.go:368] Setting JSON to false
	I1206 11:34:25.209391  705804 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15417,"bootTime":1765005449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 11:34:25.209494  705804 start.go:143] virtualization:  
	I1206 11:34:25.214475  705804 out.go:179] * [stopped-upgrade-468509] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:34:25.217661  705804 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 11:34:25.217751  705804 notify.go:221] Checking for updates...
	I1206 11:34:25.223610  705804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:34:25.226575  705804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:34:25.229555  705804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 11:34:25.232517  705804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:34:25.235510  705804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:34:25.239094  705804 config.go:182] Loaded profile config "stopped-upgrade-468509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 11:34:25.242690  705804 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1206 11:34:25.245618  705804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:34:25.268833  705804 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:34:25.268955  705804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:34:25.329966  705804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:34:25.320098842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:34:25.330078  705804 docker.go:319] overlay module found
	I1206 11:34:25.333180  705804 out.go:179] * Using the docker driver based on existing profile
	I1206 11:34:25.336012  705804 start.go:309] selected driver: docker
	I1206 11:34:25.336040  705804 start.go:927] validating driver "docker" against &{Name:stopped-upgrade-468509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-468509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:34:25.336155  705804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:34:25.336892  705804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:34:25.395858  705804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:34:25.3860989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:34:25.396168  705804 cni.go:84] Creating CNI manager for ""
	I1206 11:34:25.396237  705804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:34:25.396284  705804 start.go:353] cluster config:
	{Name:stopped-upgrade-468509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-468509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:34:25.399524  705804 out.go:179] * Starting "stopped-upgrade-468509" primary control-plane node in "stopped-upgrade-468509" cluster
	I1206 11:34:25.402453  705804 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:34:25.405491  705804 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:34:25.408361  705804 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1206 11:34:25.408412  705804 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1206 11:34:25.408425  705804 cache.go:65] Caching tarball of preloaded images
	I1206 11:34:25.408432  705804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1206 11:34:25.408508  705804 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:34:25.408519  705804 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1206 11:34:25.408626  705804 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/config.json ...
	I1206 11:34:25.426998  705804 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1206 11:34:25.427020  705804 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1206 11:34:25.427038  705804 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:34:25.427077  705804 start.go:360] acquireMachinesLock for stopped-upgrade-468509: {Name:mk6483d8a3ac806e7f7fe8c3534059328c9e0bfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:34:25.427195  705804 start.go:364] duration metric: took 90.247µs to acquireMachinesLock for "stopped-upgrade-468509"
	I1206 11:34:25.427222  705804 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:34:25.427229  705804 fix.go:54] fixHost starting: 
	I1206 11:34:25.427510  705804 cli_runner.go:164] Run: docker container inspect stopped-upgrade-468509 --format={{.State.Status}}
	I1206 11:34:25.445118  705804 fix.go:112] recreateIfNeeded on stopped-upgrade-468509: state=Stopped err=<nil>
	W1206 11:34:25.445147  705804 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:34:25.448406  705804 out.go:252] * Restarting existing docker container for "stopped-upgrade-468509" ...
	I1206 11:34:25.448508  705804 cli_runner.go:164] Run: docker start stopped-upgrade-468509
	I1206 11:34:25.747408  705804 cli_runner.go:164] Run: docker container inspect stopped-upgrade-468509 --format={{.State.Status}}
	I1206 11:34:25.771081  705804 kic.go:430] container "stopped-upgrade-468509" state is running.
	I1206 11:34:25.771588  705804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-468509
	I1206 11:34:25.791455  705804 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/config.json ...
	I1206 11:34:25.791671  705804 machine.go:94] provisionDockerMachine start ...
	I1206 11:34:25.791728  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:25.811750  705804 main.go:143] libmachine: Using SSH client type: native
	I1206 11:34:25.812068  705804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:34:25.812077  705804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:34:25.813045  705804 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:34:28.939568  705804 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-468509
	
	I1206 11:34:28.939596  705804 ubuntu.go:182] provisioning hostname "stopped-upgrade-468509"
	I1206 11:34:28.939672  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:28.960090  705804 main.go:143] libmachine: Using SSH client type: native
	I1206 11:34:28.960403  705804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:34:28.960419  705804 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-468509 && echo "stopped-upgrade-468509" | sudo tee /etc/hostname
	I1206 11:34:29.100412  705804 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-468509
	
	I1206 11:34:29.100487  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:29.118186  705804 main.go:143] libmachine: Using SSH client type: native
	I1206 11:34:29.118519  705804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:34:29.118536  705804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-468509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-468509/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-468509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:34:29.247426  705804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:34:29.247454  705804 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 11:34:29.247476  705804 ubuntu.go:190] setting up certificates
	I1206 11:34:29.247486  705804 provision.go:84] configureAuth start
	I1206 11:34:29.247565  705804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-468509
	I1206 11:34:29.265109  705804 provision.go:143] copyHostCerts
	I1206 11:34:29.265185  705804 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 11:34:29.265200  705804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:34:29.265278  705804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 11:34:29.265390  705804 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 11:34:29.265401  705804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:34:29.265427  705804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 11:34:29.265497  705804 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 11:34:29.265506  705804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:34:29.265531  705804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 11:34:29.265591  705804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-468509 san=[127.0.0.1 192.168.85.2 localhost minikube stopped-upgrade-468509]
	I1206 11:34:30.006989  705804 provision.go:177] copyRemoteCerts
	I1206 11:34:30.007081  705804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:34:30.007156  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:30.036315  705804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/stopped-upgrade-468509/id_rsa Username:docker}
	I1206 11:34:30.153098  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 11:34:30.183672  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:34:30.223014  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:34:30.262122  705804 provision.go:87] duration metric: took 1.014609925s to configureAuth
	I1206 11:34:30.262153  705804 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:34:30.262387  705804 config.go:182] Loaded profile config "stopped-upgrade-468509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 11:34:30.262514  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:30.280520  705804 main.go:143] libmachine: Using SSH client type: native
	I1206 11:34:30.280841  705804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:34:30.280860  705804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:34:30.574369  705804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:34:30.574389  705804 machine.go:97] duration metric: took 4.782709843s to provisionDockerMachine
	I1206 11:34:30.574401  705804 start.go:293] postStartSetup for "stopped-upgrade-468509" (driver="docker")
	I1206 11:34:30.574413  705804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:34:30.574490  705804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:34:30.574530  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:30.592149  705804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/stopped-upgrade-468509/id_rsa Username:docker}
	I1206 11:34:30.684523  705804 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:34:30.687802  705804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:34:30.687831  705804 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 11:34:30.687840  705804 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 11:34:30.687847  705804 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1206 11:34:30.687858  705804 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 11:34:30.687910  705804 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 11:34:30.687997  705804 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 11:34:30.688121  705804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:34:30.697115  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:34:30.722726  705804 start.go:296] duration metric: took 148.308095ms for postStartSetup
	I1206 11:34:30.722805  705804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:34:30.722845  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:30.740056  705804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/stopped-upgrade-468509/id_rsa Username:docker}
	I1206 11:34:30.828291  705804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:34:30.833026  705804 fix.go:56] duration metric: took 5.405789137s for fixHost
	I1206 11:34:30.833051  705804 start.go:83] releasing machines lock for "stopped-upgrade-468509", held for 5.405841583s
	I1206 11:34:30.833148  705804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-468509
	I1206 11:34:30.850026  705804 ssh_runner.go:195] Run: cat /version.json
	I1206 11:34:30.850077  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:30.850111  705804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:34:30.850187  705804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-468509
	I1206 11:34:30.876431  705804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/stopped-upgrade-468509/id_rsa Username:docker}
	I1206 11:34:30.878143  705804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/stopped-upgrade-468509/id_rsa Username:docker}
	W1206 11:34:31.091266  705804 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1206 11:34:31.091382  705804 ssh_runner.go:195] Run: systemctl --version
	I1206 11:34:31.096615  705804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:34:31.240118  705804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 11:34:31.247492  705804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:34:31.256925  705804 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 11:34:31.257002  705804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:34:31.266426  705804 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:34:31.266506  705804 start.go:496] detecting cgroup driver to use...
	I1206 11:34:31.266548  705804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:34:31.266612  705804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:34:31.279501  705804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:34:31.291293  705804 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:34:31.291366  705804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:34:31.305290  705804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:34:31.317283  705804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:34:31.406181  705804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:34:31.489756  705804 docker.go:234] disabling docker service ...
	I1206 11:34:31.489824  705804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:34:31.502812  705804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:34:31.515306  705804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:34:31.606783  705804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:34:31.695799  705804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:34:31.709647  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:34:31.728974  705804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1206 11:34:31.729053  705804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.740502  705804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:34:31.740592  705804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.751857  705804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.762635  705804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.772818  705804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:34:31.782343  705804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.793673  705804 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.804015  705804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:34:31.814123  705804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:34:31.823081  705804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:34:31.831760  705804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:34:31.912482  705804 ssh_runner.go:195] Run: sudo systemctl restart crio
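
The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a handful of settings before the restart. A quick check, as a sketch (key names come from the commands above; the expected values in the comments are inferred from those same commands):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.10"
    #           cgroup_manager = "cgroupfs"
    #           conmon_cgroup = "pod"
    #           "net.ipv4.ip_unprivileged_port_start=0"  (inside default_sysctls)
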
	I1206 11:34:32.035551  705804 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:34:32.035636  705804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:34:32.039372  705804 start.go:564] Will wait 60s for crictl version
	I1206 11:34:32.039449  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:34:32.043210  705804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 11:34:32.087268  705804 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1206 11:34:32.087366  705804 ssh_runner.go:195] Run: crio --version
	I1206 11:34:32.135942  705804 ssh_runner.go:195] Run: crio --version
	I1206 11:34:32.182378  705804 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1206 11:34:32.185107  705804 cli_runner.go:164] Run: docker network inspect stopped-upgrade-468509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:34:32.201341  705804 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:34:32.205216  705804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
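
The one-liner above is the usual sudo-safe, idempotent hosts-entry refresh: the unprivileged shell filters any stale line into a temp file and appends the desired entry, so only the final copy runs as root. Generalized sketch with the name and IP from this log:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
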
	I1206 11:34:32.216201  705804 kubeadm.go:884] updating cluster {Name:stopped-upgrade-468509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-468509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:34:32.216314  705804 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1206 11:34:32.216374  705804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:34:32.260522  705804 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:34:32.260547  705804 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:34:32.260601  705804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:34:32.300115  705804 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:34:32.300141  705804 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:34:32.300149  705804 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1206 11:34:32.300250  705804 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-468509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-468509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
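
The kubelet unit shown above is written to disk a few steps later (see the scp lines below, targeting /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in). With shell access to the node it can be inspected through systemd itself; a sketch:

    systemctl cat kubelet   # prints the base unit plus kubelet.service.d drop-ins
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
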
	I1206 11:34:32.300327  705804 ssh_runner.go:195] Run: crio config
	I1206 11:34:32.353310  705804 cni.go:84] Creating CNI manager for ""
	I1206 11:34:32.353332  705804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:34:32.353361  705804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:34:32.353384  705804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-468509 NodeName:stopped-upgrade-468509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:34:32.353518  705804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-468509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
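
Before kubeadm consumes the file rendered above, it can be sanity-checked offline with the kubeadm binary minikube staged on the node. A hedged sketch; `kubeadm config validate` should exist in this kubeadm generation, but treat the subcommand as an assumption rather than part of minikube's own flow:

    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml
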
	
	I1206 11:34:32.353600  705804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1206 11:34:32.362614  705804 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:34:32.362714  705804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:34:32.371424  705804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1206 11:34:32.389980  705804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:34:32.408782  705804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1206 11:34:32.429488  705804 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:34:32.434669  705804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:34:32.450047  705804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:34:32.553337  705804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:34:32.567869  705804 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509 for IP: 192.168.85.2
	I1206 11:34:32.567941  705804 certs.go:195] generating shared ca certs ...
	I1206 11:34:32.567974  705804 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:32.568155  705804 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 11:34:32.568244  705804 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 11:34:32.568279  705804 certs.go:257] generating profile certs ...
	I1206 11:34:32.568397  705804 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/client.key
	I1206 11:34:32.568521  705804 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/apiserver.key.4d31ef87
	I1206 11:34:32.568606  705804 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/proxy-client.key
	I1206 11:34:32.568762  705804 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 11:34:32.568829  705804 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 11:34:32.568862  705804 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 11:34:32.568921  705804 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:34:32.568980  705804 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:34:32.569027  705804 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 11:34:32.569104  705804 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:34:32.569734  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:34:32.607765  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:34:32.636263  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:34:32.668665  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 11:34:32.707397  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1206 11:34:32.757480  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:34:32.786766  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:34:32.812229  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:34:32.838684  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 11:34:32.865383  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 11:34:32.891262  705804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:34:32.920932  705804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:34:32.940205  705804 ssh_runner.go:195] Run: openssl version
	I1206 11:34:32.946056  705804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 11:34:32.955808  705804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 11:34:32.964942  705804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 11:34:32.975453  705804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 11:34:32.975578  705804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 11:34:32.983032  705804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:34:32.992257  705804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:33.001473  705804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:34:33.013579  705804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:33.017960  705804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:33.018052  705804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:33.026293  705804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:34:33.036146  705804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 11:34:33.045369  705804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 11:34:33.055463  705804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 11:34:33.059622  705804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 11:34:33.059691  705804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 11:34:33.067493  705804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
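
The hash-named `.0` files being tested are OpenSSL subject-hash symlinks: `openssl x509 -hash` prints the name under which OpenSSL looks a CA up in /etc/ssl/certs. A sketch of the link-creation step implied by the sequence above, using the cert from this log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem)
    sudo ln -fs /usr/share/ca-certificates/488068.pem "/etc/ssl/certs/${h}.0"   # h is 51391683 here
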
	I1206 11:34:33.076682  705804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:34:33.080655  705804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:34:33.089101  705804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:34:33.096928  705804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:34:33.104935  705804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:34:33.112601  705804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:34:33.120236  705804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
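
`-checkend 86400` makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24 h), so each call above doubles as a pass/fail freshness probe. Minimal sketch against one of the certs checked above:

    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt \
        || echo 'front-proxy-client.crt expires within 24h' >&2
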
	I1206 11:34:33.128025  705804 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-468509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-468509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:34:33.128113  705804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:34:33.128183  705804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:34:33.182196  705804 cri.go:89] found id: ""
	I1206 11:34:33.182349  705804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:34:33.203223  705804 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:34:33.203241  705804 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:34:33.203298  705804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:34:33.253166  705804 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:34:33.253715  705804 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-468509" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:34:33.253946  705804 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-484819/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-468509" cluster setting kubeconfig missing "stopped-upgrade-468509" context setting]
	I1206 11:34:33.254397  705804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:33.255098  705804 kapi.go:59] client config for stopped-upgrade-468509: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/stopped-upgrade-468509/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:34:33.255859  705804 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:34:33.255883  705804 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:34:33.255891  705804 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:34:33.255897  705804 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:34:33.255901  705804 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:34:33.256201  705804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:34:33.276290  705804 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 11:34:05.228867751 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 11:34:32.420293690 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1206 11:34:33.276315  705804 kubeadm.go:1161] stopping kube-system containers ...
	I1206 11:34:33.276328  705804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 11:34:33.276388  705804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:34:33.391685  705804 cri.go:89] found id: ""
	I1206 11:34:33.391755  705804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 11:34:33.470229  705804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:34:33.479873  705804 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5647 Dec  6 11:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  6 11:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  6 11:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  6 11:34 /etc/kubernetes/scheduler.conf
	
	I1206 11:34:33.479968  705804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:34:33.490055  705804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:34:33.502058  705804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:34:33.511514  705804 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:34:33.511615  705804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:34:33.520527  705804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:34:33.529526  705804 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:34:33.529590  705804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:34:33.538747  705804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:34:33.548379  705804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:34:33.596129  705804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:34:36.994810  705804 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.39864602s)
	I1206 11:34:36.994884  705804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:34:37.168915  705804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:34:37.237258  705804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:34:37.324777  705804 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:34:37.324856  705804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:34:37.825824  705804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:34:38.325823  705804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:34:38.351399  705804 api_server.go:72] duration metric: took 1.026621133s to wait for apiserver process to appear ...
	I1206 11:34:38.351426  705804 api_server.go:88] waiting for apiserver healthz status ...
	I1206 11:34:38.351445  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:43.352251  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:34:43.352353  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:48.354957  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:34:48.355002  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:53.356182  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:34:53.356227  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:58.359229  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:34:58.359273  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:59.155504  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:50830->192.168.85.2:8443: read: connection reset by peer
	I1206 11:34:59.155551  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:59.155853  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:34:59.352269  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:34:59.352729  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:34:59.852218  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:04.852556  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:35:04.852601  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:09.855881  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:35:09.855947  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:14.857384  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:35:14.857465  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:19.857787  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:35:19.857834  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:22.417539  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:32812->192.168.85.2:8443: read: connection reset by peer
	I1206 11:35:22.417583  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:22.417928  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:22.851501  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:22.851972  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:23.351841  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:23.352364  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:23.851621  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:23.852164  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:24.351609  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:24.352100  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:24.851740  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:24.852211  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:25.351976  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:25.352429  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:25.852158  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:25.852590  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:26.351929  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:26.352359  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:26.851640  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:26.852099  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:27.351629  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:27.352070  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:27.851597  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:27.852138  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:28.351787  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:28.352242  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:28.851874  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:28.852306  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:29.351938  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:29.352400  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:29.852131  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:29.852597  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:30.352060  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:30.352541  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:30.852217  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:30.852644  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:31.352261  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:31.352700  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:31.852343  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:31.852779  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:32.352499  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:32.352931  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:32.851607  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:32.852070  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:33.351795  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:33.352201  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:33.851864  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:33.852335  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:34.351968  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:34.352410  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:34.852146  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:34.852605  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:35.352078  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:35.352583  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:35.852309  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:35.852778  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:36.352505  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:36.352942  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:36.851562  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:36.852024  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:37.351587  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:37.351976  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:35:37.851595  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:37.852060  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
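
Each probe above is a plain GET against the apiserver's /healthz. An equivalent manual loop from the host, as a sketch; -k is needed because the cluster CA is not in the host trust store, and the address is the one from this log:

    until curl -fsk --max-time 5 https://192.168.85.2:8443/healthz; do
        sleep 0.5   # mirrors the ~500ms retry cadence visible above
    done
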
	I1206 11:35:38.351638  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:35:38.351729  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:35:38.410228  705804 cri.go:89] found id: "839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8"
	I1206 11:35:38.410248  705804 cri.go:89] found id: ""
	I1206 11:35:38.410255  705804 logs.go:282] 1 containers: [839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8]
	I1206 11:35:38.410311  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:38.414438  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:35:38.414511  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:35:38.473308  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:35:38.473329  705804 cri.go:89] found id: ""
	I1206 11:35:38.473337  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:35:38.473392  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:38.485811  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:35:38.485883  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:35:38.532328  705804 cri.go:89] found id: ""
	I1206 11:35:38.532350  705804 logs.go:282] 0 containers: []
	W1206 11:35:38.532358  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:35:38.532364  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:35:38.532423  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:35:38.581407  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:35:38.581427  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:35:38.581431  705804 cri.go:89] found id: ""
	I1206 11:35:38.581439  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:35:38.581494  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:38.585420  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:38.588998  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:35:38.589068  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:35:38.636410  705804 cri.go:89] found id: ""
	I1206 11:35:38.636432  705804 logs.go:282] 0 containers: []
	W1206 11:35:38.636441  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:35:38.636447  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:35:38.636507  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:35:38.692312  705804 cri.go:89] found id: "0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:35:38.692333  705804 cri.go:89] found id: ""
	I1206 11:35:38.692341  705804 logs.go:282] 1 containers: [0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1]
	I1206 11:35:38.692398  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:38.696506  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:35:38.696574  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:35:38.745475  705804 cri.go:89] found id: ""
	I1206 11:35:38.745556  705804 logs.go:282] 0 containers: []
	W1206 11:35:38.745579  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:35:38.745597  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:35:38.745690  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:35:38.796794  705804 cri.go:89] found id: ""
	I1206 11:35:38.796817  705804 logs.go:282] 0 containers: []
	W1206 11:35:38.796826  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:35:38.796841  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:35:38.796853  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:35:38.891699  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:35:38.891781  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:35:38.910091  705804 logs.go:123] Gathering logs for kube-apiserver [839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8] ...
	I1206 11:35:38.910167  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8"
	I1206 11:35:38.970836  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:35:38.970929  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:35:39.022140  705804 logs.go:123] Gathering logs for kube-controller-manager [0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1] ...
	I1206 11:35:39.022222  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:35:39.071519  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:35:39.071598  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 11:35:49.148790  705804 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.077156194s)
	W1206 11:35:49.148822  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1206 11:35:49.148831  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:35:49.148841  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:35:49.205881  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:35:49.205912  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:35:49.282022  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:35:49.282061  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:35:49.325540  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:35:49.325577  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
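
The block above is one full iteration of the apiserver wait loop: probe https://192.168.85.2:8443/healthz, and when the probe fails, enumerate the CRI containers for each control-plane component and tail their logs for triage. A minimal Go sketch of that loop follows, with hypothetical names and timings inferred from the log rather than taken from minikube's source:

package diag

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// collectDiagnostics stands in for the "Gathering logs for ..." pass
// above: kubelet journal, dmesg, and per-container crictl logs.
func collectDiagnostics() {}

// waitForAPIServer polls /healthz until it returns 200 OK or the
// deadline passes; every failed probe triggers a diagnostics pass,
// which is the repeating pattern visible in this log.
func waitForAPIServer(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the guest apiserver serves a self-signed
			// cert, so a bare health probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		collectDiagnostics()
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}
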
	I1206 11:35:51.868264  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:35:56.868902  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 11:35:56.868966  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:35:56.869033  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:35:56.913801  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:35:56.913823  705804 cri.go:89] found id: "839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8"
	I1206 11:35:56.913828  705804 cri.go:89] found id: ""
	I1206 11:35:56.913835  705804 logs.go:282] 2 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8]
	I1206 11:35:56.913891  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:56.917649  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:56.921181  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:35:56.921268  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:35:56.959248  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:35:56.959273  705804 cri.go:89] found id: ""
	I1206 11:35:56.959281  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:35:56.959339  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:56.962845  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:35:56.962918  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:35:56.999147  705804 cri.go:89] found id: ""
	I1206 11:35:56.999172  705804 logs.go:282] 0 containers: []
	W1206 11:35:56.999181  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:35:56.999188  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:35:56.999252  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:35:57.039437  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:35:57.039463  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:35:57.039468  705804 cri.go:89] found id: ""
	I1206 11:35:57.039476  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:35:57.039533  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:57.043356  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:57.046854  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:35:57.046955  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:35:57.084836  705804 cri.go:89] found id: ""
	I1206 11:35:57.084859  705804 logs.go:282] 0 containers: []
	W1206 11:35:57.084868  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:35:57.084874  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:35:57.084935  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:35:57.123618  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:35:57.123642  705804 cri.go:89] found id: "0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:35:57.123653  705804 cri.go:89] found id: ""
	I1206 11:35:57.123662  705804 logs.go:282] 2 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1]
	I1206 11:35:57.123721  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:57.127307  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:35:57.130633  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:35:57.130703  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:35:57.166489  705804 cri.go:89] found id: ""
	I1206 11:35:57.166511  705804 logs.go:282] 0 containers: []
	W1206 11:35:57.166519  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:35:57.166525  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:35:57.166583  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:35:57.211192  705804 cri.go:89] found id: ""
	I1206 11:35:57.211219  705804 logs.go:282] 0 containers: []
	W1206 11:35:57.211229  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:35:57.211239  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:35:57.211272  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 11:36:00.625909  705804 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.414607613s)
	W1206 11:36:00.625949  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:46990->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:46990->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1206 11:36:00.625957  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:00.625968  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:00.703347  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:00.703382  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:00.741148  705804 logs.go:123] Gathering logs for kube-controller-manager [0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1] ...
	I1206 11:36:00.741175  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:00.782631  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:00.782656  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:00.876713  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:00.876752  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:00.894941  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:00.894970  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:00.941811  705804 logs.go:123] Gathering logs for kube-apiserver [839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8] ...
	I1206 11:36:00.941843  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8"
	W1206 11:36:00.977374  705804 logs.go:130] failed kube-apiserver [839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:36:00.974258    1556 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8\": container with ID starting with 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8 not found: ID does not exist" containerID="839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8"
	time="2025-12-06T11:36:00Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8\": container with ID starting with 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1206 11:36:00.974258    1556 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8\": container with ID starting with 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8 not found: ID does not exist" containerID="839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8"
	time="2025-12-06T11:36:00Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8\": container with ID starting with 839b2080308fff0435e35ed62e0dd18373a801dcbc2b9adf3ca72105c89907a8 not found: ID does not exist"
	
	** /stderr **
	I1206 11:36:00.977412  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:00.977425  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:01.015711  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:01.015743  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:01.054712  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:01.054741  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:01.097501  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:01.097538  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
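
The "NotFound" failure earlier in this pass (container 839b2080...) is a benign race: the apiserver pod restarted between the "crictl ps -a --quiet" listing and the "crictl logs" call, and CRI-O had already removed the old container, so crictl exits 1. The gatherer logs a warning and moves on; a sketch of that tolerant behavior (hypothetical helper, simplified to local exec):

package diag

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailContainerLogs tails a container's logs via crictl; if the
// container vanished since it was listed (NotFound), it returns empty
// output instead of an error so one stale ID does not abort the pass.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	if err != nil && strings.Contains(string(out), "NotFound") {
		return "", nil
	}
	return string(out), err
}
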
	I1206 11:36:03.643458  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:03.643981  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:03.644031  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:03.644099  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:03.680322  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:03.680344  705804 cri.go:89] found id: ""
	I1206 11:36:03.680352  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:03.680428  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:03.684173  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:03.684246  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:03.726260  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:03.726284  705804 cri.go:89] found id: ""
	I1206 11:36:03.726293  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:03.726351  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:03.729985  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:03.730110  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:03.766697  705804 cri.go:89] found id: ""
	I1206 11:36:03.766722  705804 logs.go:282] 0 containers: []
	W1206 11:36:03.766732  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:03.766739  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:03.766799  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:03.804607  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:03.804629  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:03.804634  705804 cri.go:89] found id: ""
	I1206 11:36:03.804642  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:03.804699  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:03.808385  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:03.811938  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:03.812021  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:03.851222  705804 cri.go:89] found id: ""
	I1206 11:36:03.851247  705804 logs.go:282] 0 containers: []
	W1206 11:36:03.851257  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:03.851263  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:03.851364  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:03.896582  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:03.896607  705804 cri.go:89] found id: "0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:03.896612  705804 cri.go:89] found id: ""
	I1206 11:36:03.896619  705804 logs.go:282] 2 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1]
	I1206 11:36:03.896700  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:03.900495  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:03.904149  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:03.904223  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:03.946415  705804 cri.go:89] found id: ""
	I1206 11:36:03.946488  705804 logs.go:282] 0 containers: []
	W1206 11:36:03.946509  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:03.946532  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:03.946651  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:03.982596  705804 cri.go:89] found id: ""
	I1206 11:36:03.982671  705804 logs.go:282] 0 containers: []
	W1206 11:36:03.982693  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:03.982719  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:03.982762  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:04.025849  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:04.025888  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:04.071966  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:04.071999  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:04.111541  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:04.111570  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:04.148067  705804 logs.go:123] Gathering logs for kube-controller-manager [0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1] ...
	I1206 11:36:04.148096  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:04.197629  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:04.197666  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:04.260459  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:04.260491  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:04.356996  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:04.357033  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:04.376096  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:04.376125  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:04.444730  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:04.444748  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:04.444763  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:04.533489  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:04.533529  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
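
Each gathering pass also includes a runtime-agnostic "container status" step, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, which runs crictl from wherever it resolves on PATH and falls back to docker if that fails. The same fallback in Go might look like this (a sketch, not minikube's code):

package diag

import "os/exec"

// containerStatus prefers crictl when it is on PATH and falls back to
// docker, mirroring the shell fallback used in the log above.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}
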
	I1206 11:36:07.080129  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:07.080648  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:07.080698  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:07.080754  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:07.124259  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:07.124287  705804 cri.go:89] found id: ""
	I1206 11:36:07.124296  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:07.124355  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:07.128113  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:07.128195  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:07.171337  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:07.171360  705804 cri.go:89] found id: ""
	I1206 11:36:07.171368  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:07.171428  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:07.175246  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:07.175326  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:07.234937  705804 cri.go:89] found id: ""
	I1206 11:36:07.234965  705804 logs.go:282] 0 containers: []
	W1206 11:36:07.234974  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:07.234980  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:07.235041  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:07.280876  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:07.280909  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:07.280914  705804 cri.go:89] found id: ""
	I1206 11:36:07.280922  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:07.280980  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:07.284990  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:07.289241  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:07.289310  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:07.326419  705804 cri.go:89] found id: ""
	I1206 11:36:07.326445  705804 logs.go:282] 0 containers: []
	W1206 11:36:07.326454  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:07.326461  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:07.326524  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:07.368875  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:07.368900  705804 cri.go:89] found id: "0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:07.368905  705804 cri.go:89] found id: ""
	I1206 11:36:07.368913  705804 logs.go:282] 2 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1]
	I1206 11:36:07.368986  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:07.373151  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:07.376741  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:07.376836  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:07.416869  705804 cri.go:89] found id: ""
	I1206 11:36:07.416895  705804 logs.go:282] 0 containers: []
	W1206 11:36:07.416904  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:07.416910  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:07.417017  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:07.453575  705804 cri.go:89] found id: ""
	I1206 11:36:07.453641  705804 logs.go:282] 0 containers: []
	W1206 11:36:07.453668  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:07.453689  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:07.453707  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:07.496245  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:07.496280  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:07.514758  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:07.514786  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:07.589109  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:07.589128  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:07.589141  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:07.625044  705804 logs.go:123] Gathering logs for kube-controller-manager [0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1] ...
	I1206 11:36:07.625114  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:07.663062  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:07.663091  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:07.719182  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:07.719211  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:07.818210  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:07.818249  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:07.864469  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:07.864504  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:07.900330  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:07.900359  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:07.985515  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:07.985554  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:10.532032  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:10.532525  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:10.532575  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:10.532648  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:10.574034  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:10.574057  705804 cri.go:89] found id: ""
	I1206 11:36:10.574065  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:10.574122  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:10.578070  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:10.578143  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:10.615264  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:10.615289  705804 cri.go:89] found id: ""
	I1206 11:36:10.615297  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:10.615355  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:10.619295  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:10.619365  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:10.662085  705804 cri.go:89] found id: ""
	I1206 11:36:10.662109  705804 logs.go:282] 0 containers: []
	W1206 11:36:10.662118  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:10.662124  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:10.662180  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:10.699731  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:10.699797  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:10.699808  705804 cri.go:89] found id: ""
	I1206 11:36:10.699817  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:10.699882  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:10.703606  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:10.707199  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:10.707284  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:10.743181  705804 cri.go:89] found id: ""
	I1206 11:36:10.743207  705804 logs.go:282] 0 containers: []
	W1206 11:36:10.743216  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:10.743223  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:10.743293  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:10.783756  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:10.783780  705804 cri.go:89] found id: "0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:10.783785  705804 cri.go:89] found id: ""
	I1206 11:36:10.783793  705804 logs.go:282] 2 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1]
	I1206 11:36:10.783855  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:10.787482  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:10.791027  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:10.791097  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:10.835780  705804 cri.go:89] found id: ""
	I1206 11:36:10.835803  705804 logs.go:282] 0 containers: []
	W1206 11:36:10.835811  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:10.835817  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:10.835878  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:10.875276  705804 cri.go:89] found id: ""
	I1206 11:36:10.875304  705804 logs.go:282] 0 containers: []
	W1206 11:36:10.875313  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:10.875323  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:10.875334  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:10.956245  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:10.956266  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:10.956279  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:11.004962  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:11.005002  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:11.045631  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:11.045661  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:11.120345  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:11.120378  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:11.157043  705804 logs.go:123] Gathering logs for kube-controller-manager [0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1] ...
	I1206 11:36:11.157073  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0702d22c66ec7e9a0d16be3009c05af6f1096a7dbc3e434f662b2ae5159b91c1"
	I1206 11:36:11.198432  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:11.198462  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:11.242527  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:11.242560  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:11.348850  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:11.348891  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:11.367857  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:11.367887  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:11.406325  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:11.406353  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
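
The "describe nodes" gather shells into the guest and runs the kubectl binary minikube pins under /var/lib/minikube/binaries/v1.32.0 against the guest kubeconfig, which is why it fails with "connection refused" whenever the local apiserver is down. A sketch of that invocation (hypothetical wrapper around the exact command shown in the log):

package diag

import "os/exec"

// describeNodes runs the guest's pinned kubectl against the guest
// kubeconfig, as the "describe nodes" gathering step above does.
func describeNodes(version string) ([]byte, error) {
	bin := "/var/lib/minikube/binaries/" + version + "/kubectl"
	return exec.Command("sudo", bin, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
}
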
	I1206 11:36:13.954510  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:13.954952  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:13.955005  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:13.955082  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:14.011825  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:14.011861  705804 cri.go:89] found id: ""
	I1206 11:36:14.011871  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:14.011951  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:14.016009  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:14.016088  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:14.056897  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:14.056930  705804 cri.go:89] found id: ""
	I1206 11:36:14.056938  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:14.057005  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:14.060681  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:14.060753  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:14.100284  705804 cri.go:89] found id: ""
	I1206 11:36:14.100321  705804 logs.go:282] 0 containers: []
	W1206 11:36:14.100330  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:14.100337  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:14.100405  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:14.137723  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:14.137745  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:14.137750  705804 cri.go:89] found id: ""
	I1206 11:36:14.137758  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:14.137825  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:14.141594  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:14.145102  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:14.145190  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:14.183117  705804 cri.go:89] found id: ""
	I1206 11:36:14.183163  705804 logs.go:282] 0 containers: []
	W1206 11:36:14.183172  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:14.183179  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:14.183238  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:14.228439  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:14.228458  705804 cri.go:89] found id: ""
	I1206 11:36:14.228467  705804 logs.go:282] 1 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:14.228529  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:14.232496  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:14.232567  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:14.273125  705804 cri.go:89] found id: ""
	I1206 11:36:14.273150  705804 logs.go:282] 0 containers: []
	W1206 11:36:14.273160  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:14.273167  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:14.273240  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:14.312435  705804 cri.go:89] found id: ""
	I1206 11:36:14.312466  705804 logs.go:282] 0 containers: []
	W1206 11:36:14.312475  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:14.312489  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:14.312519  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:14.350307  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:14.350336  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:14.390537  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:14.390565  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:14.433155  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:14.433199  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:14.477265  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:14.477295  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:14.577397  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:14.577437  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:14.621359  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:14.621390  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:14.723017  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:14.723113  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:14.771103  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:14.771173  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:14.789689  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:14.789718  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:14.861369  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
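
Worth noting how the probe failures evolve across this section: first "TLS handshake timeout" (the listener accepted but stalled), then "connection reset by peer" (the connection was dropped mid-request), and from here on "connection refused" (nothing listening on 8443; only the replacement apiserver container c3a2811a... remains in the listings). A hedged sketch of a classifier for these three modes, a Unix-oriented heuristic rather than anything minikube does:

package diag

import (
	"errors"
	"net"
	"syscall"
)

// classifyDialError maps the three failure modes seen in this log to a
// rough apiserver state.
func classifyDialError(err error) string {
	var netErr net.Error
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		return "nothing listening: apiserver exited"
	case errors.Is(err, syscall.ECONNRESET):
		return "connection dropped: apiserver died mid-request"
	case errors.As(err, &netErr) && netErr.Timeout():
		return "listener stalled: apiserver wedged or overloaded"
	default:
		return "unknown"
	}
}
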
	I1206 11:36:17.361546  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:17.362053  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:17.362121  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:17.362195  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:17.399583  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:17.399611  705804 cri.go:89] found id: ""
	I1206 11:36:17.399620  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:17.399677  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:17.403324  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:17.403393  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:17.440243  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:17.440266  705804 cri.go:89] found id: ""
	I1206 11:36:17.440275  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:17.440334  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:17.444146  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:17.444221  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:17.487663  705804 cri.go:89] found id: ""
	I1206 11:36:17.487689  705804 logs.go:282] 0 containers: []
	W1206 11:36:17.487699  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:17.487738  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:17.487806  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:17.525863  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:17.525884  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:17.525890  705804 cri.go:89] found id: ""
	I1206 11:36:17.525897  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:17.525957  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:17.529581  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:17.533004  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:17.533092  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:17.571610  705804 cri.go:89] found id: ""
	I1206 11:36:17.571635  705804 logs.go:282] 0 containers: []
	W1206 11:36:17.571644  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:17.571651  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:17.571711  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:17.610646  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:17.610669  705804 cri.go:89] found id: ""
	I1206 11:36:17.610677  705804 logs.go:282] 1 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:17.610734  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:17.614486  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:17.614569  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:17.653272  705804 cri.go:89] found id: ""
	I1206 11:36:17.653297  705804 logs.go:282] 0 containers: []
	W1206 11:36:17.653306  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:17.653312  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:17.653392  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:17.696767  705804 cri.go:89] found id: ""
	I1206 11:36:17.696804  705804 logs.go:282] 0 containers: []
	W1206 11:36:17.696814  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:17.696829  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:17.696841  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:17.810000  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:17.810038  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:17.828178  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:17.828207  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:17.900787  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:17.900813  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:17.900827  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:17.938664  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:17.938695  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:18.027877  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:18.027917  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:18.067504  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:18.067536  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:18.111264  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:18.111295  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:18.148773  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:18.148803  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:18.192267  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:18.192348  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:20.740181  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:20.740669  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:20.740727  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:20.740814  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:20.777799  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:20.777822  705804 cri.go:89] found id: ""
	I1206 11:36:20.777830  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:20.777895  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:20.781590  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:20.781666  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:20.820446  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:20.820468  705804 cri.go:89] found id: ""
	I1206 11:36:20.820477  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:20.820533  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:20.824298  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:20.824383  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:20.862482  705804 cri.go:89] found id: ""
	I1206 11:36:20.862548  705804 logs.go:282] 0 containers: []
	W1206 11:36:20.862575  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:20.862594  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:20.862679  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:20.904401  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:20.904423  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:20.904429  705804 cri.go:89] found id: ""
	I1206 11:36:20.904436  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:20.904516  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:20.908145  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:20.911965  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:20.912073  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:20.948455  705804 cri.go:89] found id: ""
	I1206 11:36:20.948480  705804 logs.go:282] 0 containers: []
	W1206 11:36:20.948489  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:20.948496  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:20.948555  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:20.987097  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:20.987143  705804 cri.go:89] found id: ""
	I1206 11:36:20.987151  705804 logs.go:282] 1 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:20.987206  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:20.990767  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:20.990836  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:21.036865  705804 cri.go:89] found id: ""
	I1206 11:36:21.036890  705804 logs.go:282] 0 containers: []
	W1206 11:36:21.036899  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:21.036911  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:21.036970  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:21.073667  705804 cri.go:89] found id: ""
	I1206 11:36:21.073691  705804 logs.go:282] 0 containers: []
	W1206 11:36:21.073700  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:21.073714  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:21.073744  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:21.116784  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:21.116810  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:21.222897  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:21.222934  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:21.260779  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:21.260868  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:21.339161  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:21.339194  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:21.378931  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:21.378959  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:21.421699  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:21.421734  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:21.446810  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:21.446838  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:21.526813  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:21.526834  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:21.526847  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:21.566944  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:21.566973  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
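
Every retry in this window follows the same shape: probe the apiserver's healthz endpoint, and on "connection refused" fall back to enumerating control-plane containers and tailing their logs. The probe can be reproduced from the host with curl; -k skips certificate verification, which is adequate for a pure reachability check (an illustrative probe, not a command the harness runs):

    curl -ks https://192.168.85.2:8443/healthz; echo " exit=$?"
    # exit 7 (couldn't connect) matches the "connection refused" seen above
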
	I1206 11:36:24.103512  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:24.104017  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:24.104069  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:24.104128  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:24.142538  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:24.142558  705804 cri.go:89] found id: ""
	I1206 11:36:24.142566  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:24.142622  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:24.146233  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:24.146309  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:24.182943  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:24.182967  705804 cri.go:89] found id: ""
	I1206 11:36:24.182975  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:24.183031  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:24.186921  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:24.187018  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:24.224852  705804 cri.go:89] found id: ""
	I1206 11:36:24.224881  705804 logs.go:282] 0 containers: []
	W1206 11:36:24.224891  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:24.224897  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:24.224956  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:24.264320  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:24.264343  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:24.264348  705804 cri.go:89] found id: ""
	I1206 11:36:24.264355  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:24.264443  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:24.268965  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:24.272970  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:24.273047  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:24.313104  705804 cri.go:89] found id: ""
	I1206 11:36:24.313129  705804 logs.go:282] 0 containers: []
	W1206 11:36:24.313139  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:24.313145  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:24.313206  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:24.349352  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:24.349380  705804 cri.go:89] found id: ""
	I1206 11:36:24.349389  705804 logs.go:282] 1 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:24.349445  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:24.353191  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:24.353266  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:24.389999  705804 cri.go:89] found id: ""
	I1206 11:36:24.390071  705804 logs.go:282] 0 containers: []
	W1206 11:36:24.390095  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:24.390115  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:24.390200  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:24.429160  705804 cri.go:89] found id: ""
	I1206 11:36:24.429185  705804 logs.go:282] 0 containers: []
	W1206 11:36:24.429195  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:24.429210  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:24.429222  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:24.530231  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:24.530267  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:24.568697  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:24.568728  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:24.613180  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:24.613207  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:24.726620  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:24.726657  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:24.766186  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:24.766214  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:24.803649  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:24.803675  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:24.847646  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:24.847680  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:24.865486  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:24.865515  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:24.937695  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:24.937715  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:24.937728  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:27.487242  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:27.487704  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:27.487770  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:27.487835  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:27.526895  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:27.526917  705804 cri.go:89] found id: ""
	I1206 11:36:27.526926  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:27.526986  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:27.530757  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:27.530834  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:27.567544  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:27.567565  705804 cri.go:89] found id: ""
	I1206 11:36:27.567572  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:27.567627  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:27.571384  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:27.571457  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:27.607282  705804 cri.go:89] found id: ""
	I1206 11:36:27.607309  705804 logs.go:282] 0 containers: []
	W1206 11:36:27.607318  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:27.607325  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:27.607385  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:27.647176  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:27.647197  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:27.647202  705804 cri.go:89] found id: ""
	I1206 11:36:27.647210  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:27.647269  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:27.650910  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:27.654704  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:27.654776  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:27.692690  705804 cri.go:89] found id: ""
	I1206 11:36:27.692716  705804 logs.go:282] 0 containers: []
	W1206 11:36:27.692726  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:27.692733  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:27.692818  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:27.730085  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:27.730109  705804 cri.go:89] found id: ""
	I1206 11:36:27.730118  705804 logs.go:282] 1 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:27.730175  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:27.733906  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:27.733982  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:27.772093  705804 cri.go:89] found id: ""
	I1206 11:36:27.772118  705804 logs.go:282] 0 containers: []
	W1206 11:36:27.772128  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:27.772135  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:27.772194  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:27.810962  705804 cri.go:89] found id: ""
	I1206 11:36:27.810986  705804 logs.go:282] 0 containers: []
	W1206 11:36:27.810995  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:27.811008  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:27.811019  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:27.880752  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:27.880772  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:27.880787  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:27.962810  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:27.962847  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:28.002117  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:28.002148  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:28.051824  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:28.051862  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:28.166035  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:28.166072  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:28.240603  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:28.240652  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:28.288015  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:28.288043  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:28.327152  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:28.327184  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:28.370648  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:28.370674  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:30.888492  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:30.888898  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:30.888948  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:30.889007  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:30.932544  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:30.932612  705804 cri.go:89] found id: ""
	I1206 11:36:30.932629  705804 logs.go:282] 1 containers: [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:30.932704  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:30.936626  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:30.936749  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:30.972747  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:30.972769  705804 cri.go:89] found id: ""
	I1206 11:36:30.972777  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:30.972884  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:30.976553  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:30.976676  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:31.014457  705804 cri.go:89] found id: ""
	I1206 11:36:31.014483  705804 logs.go:282] 0 containers: []
	W1206 11:36:31.014493  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:31.014499  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:31.014580  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:31.052400  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:31.052422  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:31.052428  705804 cri.go:89] found id: ""
	I1206 11:36:31.052436  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:31.052493  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:31.056266  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:31.060006  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:31.060082  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:31.098405  705804 cri.go:89] found id: ""
	I1206 11:36:31.098439  705804 logs.go:282] 0 containers: []
	W1206 11:36:31.098450  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:31.098458  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:31.098551  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:31.139056  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:31.139082  705804 cri.go:89] found id: ""
	I1206 11:36:31.139091  705804 logs.go:282] 1 containers: [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:31.139175  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:31.143023  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:31.143106  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:31.194227  705804 cri.go:89] found id: ""
	I1206 11:36:31.194253  705804 logs.go:282] 0 containers: []
	W1206 11:36:31.194262  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:31.194269  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:31.194340  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:31.246069  705804 cri.go:89] found id: ""
	I1206 11:36:31.246092  705804 logs.go:282] 0 containers: []
	W1206 11:36:31.246102  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:31.246119  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:31.246132  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:31.283154  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:31.283185  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:31.321958  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:31.321987  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:31.368311  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:31.368336  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:31.429241  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:31.429326  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:31.555418  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:31.555455  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:31.578090  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:31.578118  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:31.667518  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:31.667553  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:31.721710  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:31.721742  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:31.814398  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:31.814428  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:31.814441  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:34.360889  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:39.362505  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
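
The failure mode changes here: roughly five seconds elapse between the probe at 11:36:34 and the error at 11:36:39, so the client timeout fires ("context deadline exceeded") instead of an immediate refusal, which usually means something is now listening on 8443 but not yet answering. A hand-run check that separates the two cases (the 5-second bound mirrors the gap observed in the log; it is an assumption, not a documented minikube constant):

    curl -ks --max-time 5 https://192.168.85.2:8443/healthz \
      || echo "exit $?  (7 = connection refused, 28 = timed out)"
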
	I1206 11:36:39.362565  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:39.362640  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:39.403243  705804 cri.go:89] found id: "0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b"
	I1206 11:36:39.403308  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:39.403326  705804 cri.go:89] found id: ""
	I1206 11:36:39.403350  705804 logs.go:282] 2 containers: [0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
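
For the first time the listing returns two kube-apiserver IDs: the new 0c256ac3... alongside the old c3a2811a..., meaning the runtime has started a replacement while the exited container is still visible to "crictl ps -a". Filtering by state tells them apart (--state is a standard crictl flag, not minikube-specific):

    sudo crictl ps --name=kube-apiserver --state=running --quiet   # the live one
    sudo crictl ps -a --name=kube-apiserver --quiet                # includes exited
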
	I1206 11:36:39.403442  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.407153  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.410428  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:39.410497  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:39.457392  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:39.457411  705804 cri.go:89] found id: ""
	I1206 11:36:39.457419  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:39.457472  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.461312  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:39.461385  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:39.507978  705804 cri.go:89] found id: ""
	I1206 11:36:39.508004  705804 logs.go:282] 0 containers: []
	W1206 11:36:39.508014  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:39.508020  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:39.508080  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:39.544591  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:39.544615  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:39.544620  705804 cri.go:89] found id: ""
	I1206 11:36:39.544628  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:39.544697  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.548462  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.551908  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:39.552008  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:39.592118  705804 cri.go:89] found id: ""
	I1206 11:36:39.592144  705804 logs.go:282] 0 containers: []
	W1206 11:36:39.592153  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:39.592160  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:39.592239  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:39.631929  705804 cri.go:89] found id: "367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd"
	I1206 11:36:39.631952  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:39.631960  705804 cri.go:89] found id: ""
	I1206 11:36:39.631968  705804 logs.go:282] 2 containers: [367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:39.632045  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.635837  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:39.639442  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:39.639517  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:39.677228  705804 cri.go:89] found id: ""
	I1206 11:36:39.677253  705804 logs.go:282] 0 containers: []
	W1206 11:36:39.677263  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:39.677269  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:39.677328  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:39.721673  705804 cri.go:89] found id: ""
	I1206 11:36:39.721700  705804 logs.go:282] 0 containers: []
	W1206 11:36:39.721711  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:39.721721  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:39.721733  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:39.739736  705804 logs.go:123] Gathering logs for kube-apiserver [0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b] ...
	I1206 11:36:39.739767  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b"
	I1206 11:36:39.782309  705804 logs.go:123] Gathering logs for kube-controller-manager [367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd] ...
	I1206 11:36:39.782341  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd"
	I1206 11:36:39.823440  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:39.823468  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:39.865145  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:39.865175  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:39.973833  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:39.973870  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 11:36:50.054400  705804 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.080504895s)
	W1206 11:36:50.054448  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
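
This attempt differs from the earlier refusals: ssh_runner reports the kubectl call itself hung for 10.08s before dying on a TLS handshake timeout, consistent with an apiserver that accepts TCP connections but cannot complete TLS while restarting. When reproducing by hand it helps to bound the call explicitly; --request-timeout is a standard kubectl flag, shown here as a general technique rather than something logs.go sets:

    sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig --request-timeout=10s
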
	I1206 11:36:50.054456  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:50.054472  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:50.105548  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:50.105582  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:50.144871  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:50.144901  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:50.230147  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:50.230190  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:50.269906  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:50.269990  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:50.306872  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:50.306903  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:52.858620  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:54.559664  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:50012->192.168.85.2:8443: read: connection reset by peer
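
A third distinct error appears: the connection is accepted and then reset mid-read, typical of a process being killed between accept and response. Taken together, the three healthz outcomes in this capture map onto the apiserver's lifecycle: "connection refused" (nothing listening), "context deadline exceeded" (listening, not answering), "connection reset by peer" (dying mid-request). A crude watch loop makes the transitions visible (illustrative only):

    while sleep 2; do
      printf '%s ' "$(date +%T)"
      curl -ks --max-time 3 https://192.168.85.2:8443/healthz; echo " exit=$?"
    done
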
	I1206 11:36:54.559727  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:54.559792  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:54.598505  705804 cri.go:89] found id: "0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b"
	I1206 11:36:54.598527  705804 cri.go:89] found id: "c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	I1206 11:36:54.598532  705804 cri.go:89] found id: ""
	I1206 11:36:54.598540  705804 logs.go:282] 2 containers: [0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]
	I1206 11:36:54.598596  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.602052  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.605315  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:54.605379  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:54.640121  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:54.640143  705804 cri.go:89] found id: ""
	I1206 11:36:54.640152  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:54.640206  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.643814  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:54.643888  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:54.696449  705804 cri.go:89] found id: ""
	I1206 11:36:54.696474  705804 logs.go:282] 0 containers: []
	W1206 11:36:54.696483  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:54.696489  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:54.696549  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:54.738010  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:54.738035  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:54.738040  705804 cri.go:89] found id: ""
	I1206 11:36:54.738048  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:54.738109  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.741913  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.745182  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:54.745254  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:54.783005  705804 cri.go:89] found id: ""
	I1206 11:36:54.783027  705804 logs.go:282] 0 containers: []
	W1206 11:36:54.783038  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:54.783045  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:54.783103  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:54.828036  705804 cri.go:89] found id: "367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd"
	I1206 11:36:54.828056  705804 cri.go:89] found id: "04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:54.828061  705804 cri.go:89] found id: ""
	I1206 11:36:54.828068  705804 logs.go:282] 2 containers: [367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f]
	I1206 11:36:54.828122  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.831833  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:54.835217  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:54.835292  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:54.875037  705804 cri.go:89] found id: ""
	I1206 11:36:54.875062  705804 logs.go:282] 0 containers: []
	W1206 11:36:54.875071  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:54.875077  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:54.875152  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:54.912164  705804 cri.go:89] found id: ""
	I1206 11:36:54.912240  705804 logs.go:282] 0 containers: []
	W1206 11:36:54.912257  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:54.912269  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:54.912284  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:55.025482  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:55.025529  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:55.103489  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:55.103509  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:55.103523  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:55.187470  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:55.187518  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:55.239431  705804 logs.go:123] Gathering logs for kube-controller-manager [04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f] ...
	I1206 11:36:55.239515  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04603ded590e9c0abdbde3cf2991357b56510fb8e3bb16e4ae5bece8f6dccc3f"
	I1206 11:36:55.278410  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:55.278474  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:55.324223  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:55.324252  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:55.342421  705804 logs.go:123] Gathering logs for kube-apiserver [0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b] ...
	I1206 11:36:55.342514  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b"
	I1206 11:36:55.393370  705804 logs.go:123] Gathering logs for kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb] ...
	I1206 11:36:55.393403  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	W1206 11:36:55.429725  705804 logs.go:130] failed kube-apiserver [c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:36:55.426301    3009 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb\": container with ID starting with c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb not found: ID does not exist" containerID="c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	time="2025-12-06T11:36:55Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb\": container with ID starting with c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb not found: ID does not exist"
	 output: 
	** stderr ** 
	E1206 11:36:55.426301    3009 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb\": container with ID starting with c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb not found: ID does not exist" containerID="c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb"
	time="2025-12-06T11:36:55Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb\": container with ID starting with c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb not found: ID does not exist"
	
	** /stderr **
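
This is the first error from the log collector itself rather than from the cluster: between the container listing at 11:36:54.598 and the fetch a second later, the runtime removed the exited c3a2811a... apiserver container, so "crictl logs" fails with NotFound. The race is inherent to list-then-fetch; a defensive variant re-checks existence first (a sketch of the pattern, not what logs.go actually does):

    id=c3a2811a9868f3a7f3cb1f8e42305f498fe94316c5ccb309eb78f270f540f4cb
    if sudo crictl inspect "$id" >/dev/null 2>&1; then
      sudo crictl logs --tail 400 "$id"
    else
      echo "container $id already removed"
    fi
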
	I1206 11:36:55.429748  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:55.429763  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:55.470919  705804 logs.go:123] Gathering logs for kube-controller-manager [367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd] ...
	I1206 11:36:55.470997  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd"
	I1206 11:36:55.512091  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:55.512120  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:58.065132  705804 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:36:58.065594  705804 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 11:36:58.065661  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:36:58.065734  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:36:58.104763  705804 cri.go:89] found id: "0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b"
	I1206 11:36:58.104785  705804 cri.go:89] found id: ""
	I1206 11:36:58.104794  705804 logs.go:282] 1 containers: [0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b]
	I1206 11:36:58.104860  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:58.108465  705804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:36:58.108546  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:36:58.145279  705804 cri.go:89] found id: "6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:36:58.145303  705804 cri.go:89] found id: ""
	I1206 11:36:58.145312  705804 logs.go:282] 1 containers: [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673]
	I1206 11:36:58.145368  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:58.148950  705804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:36:58.149031  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:36:58.202287  705804 cri.go:89] found id: ""
	I1206 11:36:58.202313  705804 logs.go:282] 0 containers: []
	W1206 11:36:58.202322  705804 logs.go:284] No container was found matching "coredns"
	I1206 11:36:58.202328  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:36:58.202386  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:36:58.249498  705804 cri.go:89] found id: "229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:58.249519  705804 cri.go:89] found id: "e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:58.249525  705804 cri.go:89] found id: ""
	I1206 11:36:58.249533  705804 logs.go:282] 2 containers: [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808]
	I1206 11:36:58.249590  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:58.253221  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:58.256635  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:36:58.256703  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:36:58.297414  705804 cri.go:89] found id: ""
	I1206 11:36:58.297441  705804 logs.go:282] 0 containers: []
	W1206 11:36:58.297462  705804 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:36:58.297489  705804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:36:58.297562  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:36:58.335307  705804 cri.go:89] found id: "367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd"
	I1206 11:36:58.335334  705804 cri.go:89] found id: ""
	I1206 11:36:58.335354  705804 logs.go:282] 1 containers: [367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd]
	I1206 11:36:58.335447  705804 ssh_runner.go:195] Run: which crictl
	I1206 11:36:58.339048  705804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:36:58.339161  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:36:58.382380  705804 cri.go:89] found id: ""
	I1206 11:36:58.382414  705804 logs.go:282] 0 containers: []
	W1206 11:36:58.382430  705804 logs.go:284] No container was found matching "kindnet"
	I1206 11:36:58.382438  705804 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:36:58.382538  705804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:36:58.418858  705804 cri.go:89] found id: ""
	I1206 11:36:58.418883  705804 logs.go:282] 0 containers: []
	W1206 11:36:58.418903  705804 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:36:58.418936  705804 logs.go:123] Gathering logs for kubelet ...
	I1206 11:36:58.418956  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:36:58.539758  705804 logs.go:123] Gathering logs for dmesg ...
	I1206 11:36:58.539796  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:36:58.558009  705804 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:36:58.558039  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:36:58.634771  705804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:36:58.634798  705804 logs.go:123] Gathering logs for kube-scheduler [229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5] ...
	I1206 11:36:58.634812  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229427b81a91b6296a8e4fa218bfbb69a4bc4b45e959c7358756cc05cc88bab5"
	I1206 11:36:58.714852  705804 logs.go:123] Gathering logs for kube-scheduler [e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808] ...
	I1206 11:36:58.714887  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bfd9bd07d3b5d1cb46672505b7d1f1708a4e121aac180bc709838de1948808"
	I1206 11:36:58.751029  705804 logs.go:123] Gathering logs for kube-controller-manager [367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd] ...
	I1206 11:36:58.751100  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367106ca5622dd6c1d1655445a5a02ee3e30471f1f21dedee6c8b0290cd5cffd"
	I1206 11:36:58.788070  705804 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:36:58.788100  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:36:58.841608  705804 logs.go:123] Gathering logs for container status ...
	I1206 11:36:58.841642  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:36:58.881766  705804 logs.go:123] Gathering logs for kube-apiserver [0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b] ...
	I1206 11:36:58.881797  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c256ac3ddce72be409809c0b31750fc099210446b826eb055e6be1d25a0274b"
	I1206 11:36:58.921705  705804 logs.go:123] Gathering logs for etcd [6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673] ...
	I1206 11:36:58.921739  705804 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e091440a908a7c16ded0b7bdbdb82abf088f1e9a32307497e637e1fa3375673"
	I1206 11:37:03.117951  675284 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000114184s
	I1206 11:37:03.117990  675284 kubeadm.go:319] 
	I1206 11:37:03.118054  675284 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:37:03.118090  675284 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:37:03.118278  675284 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:37:03.118335  675284 kubeadm.go:319] 
	I1206 11:37:03.118461  675284 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:37:03.118496  675284 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:37:03.118528  675284 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:37:03.118534  675284 kubeadm.go:319] 
	I1206 11:37:03.121777  675284 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:37:03.122201  675284 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:37:03.122312  675284 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:37:03.122582  675284 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 11:37:03.122592  675284 kubeadm.go:319] 
	I1206 11:37:03.122662  675284 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:37:03.122729  675284 kubeadm.go:403] duration metric: took 12m9.687967117s to StartCluster
	I1206 11:37:03.122772  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:37:03.122849  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:37:03.150554  675284 cri.go:89] found id: ""
	I1206 11:37:03.150577  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.150585  675284 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:37:03.150592  675284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:37:03.150651  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:37:03.176593  675284 cri.go:89] found id: ""
	I1206 11:37:03.176620  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.176630  675284 logs.go:284] No container was found matching "etcd"
	I1206 11:37:03.176637  675284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:37:03.176699  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:37:03.206214  675284 cri.go:89] found id: ""
	I1206 11:37:03.206240  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.206248  675284 logs.go:284] No container was found matching "coredns"
	I1206 11:37:03.206255  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:37:03.206313  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:37:03.231742  675284 cri.go:89] found id: ""
	I1206 11:37:03.231768  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.231776  675284 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:37:03.231783  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:37:03.231842  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:37:03.258845  675284 cri.go:89] found id: ""
	I1206 11:37:03.258868  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.258877  675284 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:37:03.258884  675284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:37:03.258942  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:37:03.285236  675284 cri.go:89] found id: ""
	I1206 11:37:03.285261  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.285269  675284 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:37:03.285276  675284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:37:03.285339  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:37:03.311760  675284 cri.go:89] found id: ""
	I1206 11:37:03.311783  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.311791  675284 logs.go:284] No container was found matching "kindnet"
	I1206 11:37:03.311798  675284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:37:03.311860  675284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:37:03.337843  675284 cri.go:89] found id: ""
	I1206 11:37:03.337934  675284 logs.go:282] 0 containers: []
	W1206 11:37:03.337960  675284 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:37:03.337984  675284 logs.go:123] Gathering logs for kubelet ...
	I1206 11:37:03.338026  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:37:03.411972  675284 logs.go:123] Gathering logs for dmesg ...
	I1206 11:37:03.412009  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:37:03.428553  675284 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:37:03.428582  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:37:03.520037  675284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:37:03.520059  675284 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:37:03.520071  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:37:03.553157  675284 logs.go:123] Gathering logs for container status ...
	I1206 11:37:03.553196  675284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:37:03.581339  675284 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000114184s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:37:03.581438  675284 out.go:285] * 
	W1206 11:37:03.581613  675284 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000114184s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:37:03.581633  675284 out.go:285] * 
	W1206 11:37:03.584726  675284 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:37:03.591337  675284 out.go:203] 
	W1206 11:37:03.594308  675284 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000114184s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:37:03.594365  675284 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:37:03.594385  675284 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:37:03.597633  675284 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572187518Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.57222371Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572265974Z" level=info msg="Create NRI interface"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572372819Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.57238077Z" level=info msg="runtime interface created"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572393028Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572399584Z" level=info msg="runtime interface starting up..."
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572405278Z" level=info msg="starting plugins..."
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572417512Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:24:48 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:24:48.572483094Z" level=info msg="No systemd watchdog enabled"
	Dec 06 11:24:48 kubernetes-upgrade-888189 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.358784122Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a83681db-0c49-419b-bca8-0b08dd11c29a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.372979777Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9553a929-c30b-4f9b-8cf8-e9f42b3a6406 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.373537304Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=bc4e38af-cbea-49d1-89ba-0fb5b4d6a3b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.374108829Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=db979bd1-ae8c-4d07-9652-4e7af6c475b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.37458598Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2173b120-8bea-44d8-baed-594196d4f51b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.375090913Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5a79156a-e78f-4ad4-b4e4-aee284213d8a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:28:59 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:28:59.375642541Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=55942cf0-676d-4d14-b04e-eccb29f26e97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.600971351Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=14b846ed-8500-4679-926b-d1ede28d4c58 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.601562075Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=3b9866c7-8885-4c0f-b730-903571f14230 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.601994148Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=21bbab8f-a61b-4fcc-9085-0114231b32eb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.602385451Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d4f98a27-bf2e-44ba-b290-9bbeeb9ce14c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.602767785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a387ce24-8373-4c7f-850e-976b61d659de name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.603196273Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=21e626e1-42c4-410c-adff-b6b7d7e40def name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:33:01 kubernetes-upgrade-888189 crio[613]: time="2025-12-06T11:33:01.603584441Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7c7f1973-e521-4ca2-9d2a-82fc0cb0f5d9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.921462] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:01] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:02] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:03] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:08] overlayfs: idmapped layers are currently not supported
	[ +32.041559] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:11] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:12] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:13] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:14] overlayfs: idmapped layers are currently not supported
	[  +0.520412] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:15] overlayfs: idmapped layers are currently not supported
	[ +26.850323] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:16] overlayfs: idmapped layers are currently not supported
	[ +26.214447] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:19] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:21] overlayfs: idmapped layers are currently not supported
	[  +0.844232] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:22] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:23] overlayfs: idmapped layers are currently not supported
	[  +2.926881] overlayfs: idmapped layers are currently not supported
	[ +38.119955] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:37:05 up  4:19,  0 user,  load average: 0.73, 1.51, 1.98
	Linux kubernetes-upgrade-888189 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:37:02 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:37:03 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 11:37:03 kubernetes-upgrade-888189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:37:03 kubernetes-upgrade-888189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:37:03 kubernetes-upgrade-888189 kubelet[12048]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:37:03 kubernetes-upgrade-888189 kubelet[12048]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:37:03 kubernetes-upgrade-888189 kubelet[12048]: E1206 11:37:03.496399   12048 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:37:03 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:37:03 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:37:04 kubernetes-upgrade-888189 kubelet[12074]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:37:04 kubernetes-upgrade-888189 kubelet[12074]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:37:04 kubernetes-upgrade-888189 kubelet[12074]: E1206 11:37:04.242680   12074 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:37:04 kubernetes-upgrade-888189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:37:05 kubernetes-upgrade-888189 kubelet[12132]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:37:05 kubernetes-upgrade-888189 kubelet[12132]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:37:05 kubernetes-upgrade-888189 kubelet[12132]: E1206 11:37:05.045963   12132 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:37:05 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:37:05 kubernetes-upgrade-888189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-888189 -n kubernetes-upgrade-888189
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-888189 -n kubernetes-upgrade-888189: exit status 2 (498.640721ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-888189" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-888189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-888189
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-888189: (2.155927504s)
--- FAIL: TestKubernetesUpgrade (779.88s)
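The kubelet journal above pinpoints the root cause: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out after 4m0s and the upgrade fails. Per the SystemVerification warning, cgroup v1 can still be opted into by setting the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skipping the validation. A minimal sketch of that override, assuming the field serializes in the usual camelCase form (failCgroupV1) in a KubeletConfiguration document; verify against the KEP linked in the warning before relying on it:

	# sketch only: assumed key casing for the 'FailCgroupV1' option named in the warning
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

minikube's own suggestion for this exit code (--extra-config=kubelet.cgroup-driver=systemd, issue #4172) targets a cgroup-driver mismatch; on this host the cgroup v1 validation itself is what kills the kubelet, so the config override (or a cgroup v2 host) is the more direct fix.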

                                                
                                    
x
+
TestPause/serial/Pause (8.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-362686 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-362686 --alsologtostderr -v=5: exit status 80 (2.399089055s)

                                                
                                                
-- stdout --
	* Pausing node pause-362686 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 11:22:37.916436  663535 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:22:37.921172  663535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:37.921192  663535 out.go:374] Setting ErrFile to fd 2...
	I1206 11:22:37.921198  663535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:37.921510  663535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:22:37.921816  663535 out.go:368] Setting JSON to false
	I1206 11:22:37.921861  663535 mustload.go:66] Loading cluster: pause-362686
	I1206 11:22:37.923198  663535 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:37.923918  663535 cli_runner.go:164] Run: docker container inspect pause-362686 --format={{.State.Status}}
	I1206 11:22:37.959233  663535 host.go:66] Checking if "pause-362686" exists ...
	I1206 11:22:37.959562  663535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:38.066252  663535 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-06 11:22:38.053216721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:38.066936  663535 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-362686 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 11:22:38.070193  663535 out.go:179] * Pausing node pause-362686 ... 
	I1206 11:22:38.073088  663535 host.go:66] Checking if "pause-362686" exists ...
	I1206 11:22:38.073447  663535 ssh_runner.go:195] Run: systemctl --version
	I1206 11:22:38.073500  663535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:38.105459  663535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:38.238130  663535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:22:38.268813  663535 pause.go:52] kubelet running: true
	I1206 11:22:38.268881  663535 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:22:38.566337  663535 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:22:38.566423  663535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:22:38.720318  663535 cri.go:89] found id: "86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda"
	I1206 11:22:38.720395  663535 cri.go:89] found id: "ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63"
	I1206 11:22:38.720415  663535 cri.go:89] found id: "6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f"
	I1206 11:22:38.720431  663535 cri.go:89] found id: "a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135"
	I1206 11:22:38.720451  663535 cri.go:89] found id: "ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323"
	I1206 11:22:38.720484  663535 cri.go:89] found id: "c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142"
	I1206 11:22:38.720500  663535 cri.go:89] found id: "0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356"
	I1206 11:22:38.720517  663535 cri.go:89] found id: "6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	I1206 11:22:38.720534  663535 cri.go:89] found id: "941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35"
	I1206 11:22:38.720566  663535 cri.go:89] found id: "a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0"
	I1206 11:22:38.720589  663535 cri.go:89] found id: "edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815"
	I1206 11:22:38.720606  663535 cri.go:89] found id: "a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4"
	I1206 11:22:38.720623  663535 cri.go:89] found id: "e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948"
	I1206 11:22:38.720651  663535 cri.go:89] found id: "a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24"
	I1206 11:22:38.720667  663535 cri.go:89] found id: ""
	I1206 11:22:38.720737  663535 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:22:38.741007  663535 retry.go:31] will retry after 322.322327ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:22:38Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:22:39.064480  663535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:22:39.088873  663535 pause.go:52] kubelet running: false
	I1206 11:22:39.088989  663535 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:22:39.328590  663535 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:22:39.328724  663535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:22:39.417920  663535 cri.go:89] found id: "86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda"
	I1206 11:22:39.417943  663535 cri.go:89] found id: "ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63"
	I1206 11:22:39.417948  663535 cri.go:89] found id: "6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f"
	I1206 11:22:39.417952  663535 cri.go:89] found id: "a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135"
	I1206 11:22:39.417956  663535 cri.go:89] found id: "ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323"
	I1206 11:22:39.417959  663535 cri.go:89] found id: "c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142"
	I1206 11:22:39.417962  663535 cri.go:89] found id: "0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356"
	I1206 11:22:39.417965  663535 cri.go:89] found id: "6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	I1206 11:22:39.417967  663535 cri.go:89] found id: "941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35"
	I1206 11:22:39.417975  663535 cri.go:89] found id: "a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0"
	I1206 11:22:39.417978  663535 cri.go:89] found id: "edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815"
	I1206 11:22:39.417982  663535 cri.go:89] found id: "a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4"
	I1206 11:22:39.417985  663535 cri.go:89] found id: "e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948"
	I1206 11:22:39.417988  663535 cri.go:89] found id: "a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24"
	I1206 11:22:39.417991  663535 cri.go:89] found id: ""
	I1206 11:22:39.418040  663535 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:22:39.430270  663535 retry.go:31] will retry after 346.474204ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:22:39Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:22:39.777827  663535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:22:39.792151  663535 pause.go:52] kubelet running: false
	I1206 11:22:39.792218  663535 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:22:40.034051  663535 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:22:40.034199  663535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:22:40.180896  663535 cri.go:89] found id: "86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda"
	I1206 11:22:40.180920  663535 cri.go:89] found id: "ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63"
	I1206 11:22:40.180926  663535 cri.go:89] found id: "6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f"
	I1206 11:22:40.180930  663535 cri.go:89] found id: "a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135"
	I1206 11:22:40.180932  663535 cri.go:89] found id: "ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323"
	I1206 11:22:40.180937  663535 cri.go:89] found id: "c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142"
	I1206 11:22:40.180940  663535 cri.go:89] found id: "0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356"
	I1206 11:22:40.180943  663535 cri.go:89] found id: "6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	I1206 11:22:40.180946  663535 cri.go:89] found id: "941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35"
	I1206 11:22:40.180982  663535 cri.go:89] found id: "a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0"
	I1206 11:22:40.180987  663535 cri.go:89] found id: "edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815"
	I1206 11:22:40.180990  663535 cri.go:89] found id: "a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4"
	I1206 11:22:40.180993  663535 cri.go:89] found id: "e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948"
	I1206 11:22:40.180999  663535 cri.go:89] found id: "a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24"
	I1206 11:22:40.181009  663535 cri.go:89] found id: ""
	I1206 11:22:40.181092  663535 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:22:40.207053  663535 out.go:203] 
	W1206 11:22:40.210091  663535 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:22:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:22:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 11:22:40.210119  663535 out.go:285] * 
	* 
	W1206 11:22:40.217714  663535 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:22:40.220752  663535 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-362686 --alsologtostderr -v=5" : exit status 80
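The pause failure above is mechanical: minikube's pause path enumerates running containers with `sudo runc list -f json`, and /run/runc does not exist inside this CRI-O node, so all three attempts fail identically. One plausible cause (an assumption, not confirmed by this log) is that CRI-O on this image defaults to a different OCI runtime such as crun, which keeps its state under /run/crun, leaving the runc state directory uncreated. A hedged diagnostic sketch against this profile:

	# hedged diagnostics, not part of the test suite
	minikube ssh -p pause-362686 -- 'ls -ld /run/runc /run/crun'
	minikube ssh -p pause-362686 -- 'sudo crio config 2>/dev/null | grep -E "default_runtime|crun|runc"'

If /run/crun exists while /run/runc does not, the hard-coded runc invocation in the pause path is the mismatch to report upstream.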
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-362686
helpers_test.go:243: (dbg) docker inspect pause-362686:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591",
	        "Created": "2025-12-06T11:20:47.934707258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 651285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:20:48.037042539Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/hostname",
	        "HostsPath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/hosts",
	        "LogPath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591-json.log",
	        "Name": "/pause-362686",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-362686:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-362686",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591",
	                "LowerDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-362686",
	                "Source": "/var/lib/docker/volumes/pause-362686/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-362686",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-362686",
	                "name.minikube.sigs.k8s.io": "pause-362686",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a3d6ab14467a7b64e43c83e1d7257b1e0a38d0b69abafec8e8823a9e24510a8",
	            "SandboxKey": "/var/run/docker/netns/8a3d6ab14467",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33378"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-362686": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:94:71:85:ff:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c3e6f047c35c81af103705fe8c66684615511c36ae2d343dff3df867f73b991",
	                    "EndpointID": "faf39f31a962d562ea795f3cfb0e101d297ce06c3976bd692d942db03118f821",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-362686",
	                        "483762e5302b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
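Note that Docker itself still reports the kic container as running and unpaused ("Status": "running", "Paused": false above): minikube pause freezes the processes inside the node rather than the outer container. A narrower query than the full dump uses docker inspect's Go-template flag (standard docker CLI):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-362686
    # running paused=false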
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-362686 -n pause-362686
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-362686 -n pause-362686: exit status 2 (468.137766ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
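The --format={{.Host}} flag renders one field of minikube's status struct through a Go template, and the non-zero exit code encodes components that are not fully up even when the host is Running, which is why the helper notes it "may be ok". Other fields of the same struct can be pulled the same way (a sketch):

    out/minikube-linux-arm64 status -p pause-362686 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'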
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-362686 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-362686 logs -n 25: (1.837630649s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-334090 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status docker --all --full --no-pager                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat docker --no-pager                                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/docker/daemon.json                                                          │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo docker system info                                                                   │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cri-dockerd --version                                                                │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat containerd --no-pager                                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/containerd/config.toml                                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo containerd config dump                                                               │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status crio --all --full --no-pager                                        │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat crio --no-pager                                                        │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo crio config                                                                          │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ delete  │ -p cilium-334090                                                                                           │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:22 UTC │
	│ start   │ -p force-systemd-env-163342 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-163342 │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ start   │ -p pause-362686 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-362686             │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:22 UTC │
	│ pause   │ -p pause-362686 --alsologtostderr -v=5                                                                     │ pause-362686             │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:22:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
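Given that [IWEF] prefix convention (klog severity markers), warnings and errors can be filtered out of a saved copy of this log with a plain grep; a sketch, assuming the log was captured to logs.txt via "minikube logs --file=logs.txt" as the advice box suggests:

    grep -E '^[EW][0-9]{4} ' logs.txt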
	I1206 11:22:09.713954  660500 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:22:09.714183  660500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:09.714205  660500 out.go:374] Setting ErrFile to fd 2...
	I1206 11:22:09.714225  660500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:09.714494  660500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:22:09.714890  660500 out.go:368] Setting JSON to false
	I1206 11:22:09.715838  660500 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14681,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 11:22:09.715934  660500 start.go:143] virtualization:  
	I1206 11:22:09.720606  660500 out.go:179] * [pause-362686] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:22:09.723627  660500 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 11:22:09.723720  660500 notify.go:221] Checking for updates...
	I1206 11:22:09.729271  660500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:22:09.611414  660445 start.go:309] selected driver: docker
	I1206 11:22:09.611442  660445 start.go:927] validating driver "docker" against <nil>
	I1206 11:22:09.611458  660445 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:22:09.612221  660445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:09.731876  660445 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:22:09.713518381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:09.732024  660445 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 11:22:09.732245  660445 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 11:22:09.735256  660445 out.go:179] * Using Docker driver with root privileges
	I1206 11:22:09.738113  660445 cni.go:84] Creating CNI manager for ""
	I1206 11:22:09.738183  660445 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:09.738197  660445 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:22:09.738289  660445 start.go:353] cluster config:
	{Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:09.738572  660500 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:22:09.741368  660445 out.go:179] * Starting "force-systemd-env-163342" primary control-plane node in "force-systemd-env-163342" cluster
	I1206 11:22:09.744198  660500 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 11:22:09.744200  660445 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:22:09.747118  660445 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:22:09.750629  660500 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:22:09.753867  660500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:22:09.750163  660445 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:09.750210  660445 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:22:09.750222  660445 cache.go:65] Caching tarball of preloaded images
	I1206 11:22:09.750244  660445 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:22:09.750305  660445 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:22:09.750316  660445 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:22:09.750428  660445 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json ...
	I1206 11:22:09.750450  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json: {Name:mk9877e03bc9487c7b21a100fcf71b755d22f891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:09.786759  660445 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:22:09.786779  660445 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:22:09.786798  660445 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:22:09.786831  660445 start.go:360] acquireMachinesLock for force-systemd-env-163342: {Name:mk95995c71845cbdbf4b572cde1795c098f3a698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:22:09.787050  660445 start.go:364] duration metric: took 202.516µs to acquireMachinesLock for "force-systemd-env-163342"
	I1206 11:22:09.787082  660445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 11:22:09.787193  660445 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:22:09.757296  660500 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:09.758272  660500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:22:09.798892  660500 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:22:09.799020  660500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:09.913491  660500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:56 SystemTime:2025-12-06 11:22:09.90266796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:09.913597  660500 docker.go:319] overlay module found
	I1206 11:22:09.917074  660500 out.go:179] * Using the docker driver based on existing profile
	I1206 11:22:09.919986  660500 start.go:309] selected driver: docker
	I1206 11:22:09.920006  660500 start.go:927] validating driver "docker" against &{Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:09.920213  660500 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:22:09.920316  660500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:09.996254  660500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 11:22:09.986398254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:09.996641  660500 cni.go:84] Creating CNI manager for ""
	I1206 11:22:09.996686  660500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:09.996728  660500 start.go:353] cluster config:
	{Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:10.000714  660500 out.go:179] * Starting "pause-362686" primary control-plane node in "pause-362686" cluster
	I1206 11:22:10.004992  660500 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:22:10.011040  660500 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:22:10.013955  660500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:10.014013  660500 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:22:10.014026  660500 cache.go:65] Caching tarball of preloaded images
	I1206 11:22:10.014121  660500 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:22:10.014132  660500 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:22:10.014283  660500 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/config.json ...
	I1206 11:22:10.014550  660500 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:22:10.047598  660500 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:22:10.047624  660500 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:22:10.047873  660500 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:22:10.047922  660500 start.go:360] acquireMachinesLock for pause-362686: {Name:mkc3fbfa0390357cdd29a7741a7c1c2215c4924f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:22:10.047994  660500 start.go:364] duration metric: took 43.093µs to acquireMachinesLock for "pause-362686"
	I1206 11:22:10.048014  660500 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:22:10.048019  660500 fix.go:54] fixHost starting: 
	I1206 11:22:10.048299  660500 cli_runner.go:164] Run: docker container inspect pause-362686 --format={{.State.Status}}
	I1206 11:22:10.074133  660500 fix.go:112] recreateIfNeeded on pause-362686: state=Running err=<nil>
	W1206 11:22:10.074162  660500 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:22:09.794086  660445 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:22:09.794338  660445 start.go:159] libmachine.API.Create for "force-systemd-env-163342" (driver="docker")
	I1206 11:22:09.794370  660445 client.go:173] LocalClient.Create starting
	I1206 11:22:09.794424  660445 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem
	I1206 11:22:09.794455  660445 main.go:143] libmachine: Decoding PEM data...
	I1206 11:22:09.794469  660445 main.go:143] libmachine: Parsing certificate...
	I1206 11:22:09.794521  660445 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem
	I1206 11:22:09.794545  660445 main.go:143] libmachine: Decoding PEM data...
	I1206 11:22:09.794556  660445 main.go:143] libmachine: Parsing certificate...
	I1206 11:22:09.794917  660445 cli_runner.go:164] Run: docker network inspect force-systemd-env-163342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:22:09.819289  660445 cli_runner.go:211] docker network inspect force-systemd-env-163342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:22:09.819363  660445 network_create.go:284] running [docker network inspect force-systemd-env-163342] to gather additional debugging logs...
	I1206 11:22:09.819382  660445 cli_runner.go:164] Run: docker network inspect force-systemd-env-163342
	W1206 11:22:09.845165  660445 cli_runner.go:211] docker network inspect force-systemd-env-163342 returned with exit code 1
	I1206 11:22:09.845195  660445 network_create.go:287] error running [docker network inspect force-systemd-env-163342]: docker network inspect force-systemd-env-163342: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-163342 not found
	I1206 11:22:09.845210  660445 network_create.go:289] output of [docker network inspect force-systemd-env-163342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-163342 not found
	
	** /stderr **
	I1206 11:22:09.845385  660445 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:22:09.862928  660445 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-194638dca10b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:a6:03:7b:5f:e6} reservation:<nil>}
	I1206 11:22:09.863272  660445 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f3d8d6011d33 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:f2:c6:89:02:f2} reservation:<nil>}
	I1206 11:22:09.863551  660445 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b83707b00b77 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:03:cb:6f:a3:46} reservation:<nil>}
	I1206 11:22:09.863840  660445 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c3e6f047c35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:9b:68:4a:8c:06} reservation:<nil>}
	I1206 11:22:09.864258  660445 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e5080}
	I1206 11:22:09.864276  660445 network_create.go:124] attempt to create docker network force-systemd-env-163342 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:22:09.864329  660445 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-163342 force-systemd-env-163342
	I1206 11:22:09.941534  660445 network_create.go:108] docker network force-systemd-env-163342 192.168.85.0/24 created
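The four "skipping subnet ... that is taken" entries above show minikube walking Docker's existing bridge networks before settling on a free /24 (192.168.85.0/24 here). The same survey can be reproduced by hand with standard docker CLI flags (a sketch):

    docker network ls --format '{{.Name}}' \
      | xargs -I{} docker network inspect -f '{{.Name}} => {{range .IPAM.Config}}{{.Subnet}} {{end}}' {}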
	I1206 11:22:09.941568  660445 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-163342" container
	I1206 11:22:09.941653  660445 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:22:09.978817  660445 cli_runner.go:164] Run: docker volume create force-systemd-env-163342 --label name.minikube.sigs.k8s.io=force-systemd-env-163342 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:22:10.012601  660445 oci.go:103] Successfully created a docker volume force-systemd-env-163342
	I1206 11:22:10.012812  660445 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-163342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-163342 --entrypoint /usr/bin/test -v force-systemd-env-163342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:22:10.670441  660445 oci.go:107] Successfully prepared a docker volume force-systemd-env-163342
	I1206 11:22:10.670515  660445 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:10.670525  660445 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:22:10.670589  660445 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-163342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:22:14.193209  660445 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-163342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.522572863s)
	I1206 11:22:14.193262  660445 kic.go:203] duration metric: took 3.522733303s to extract preloaded images to volume ...
	W1206 11:22:14.193418  660445 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:22:14.193540  660445 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:22:14.254490  660445 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-163342 --name force-systemd-env-163342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-163342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-163342 --network force-systemd-env-163342 --ip 192.168.85.2 --volume force-systemd-env-163342:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
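Each service port in the docker run above is published as 127.0.0.1::PORT, that is, bound to loopback with an ephemeral host port chosen by Docker; the concrete mapping (the 33378-33382 range seen in the earlier inspect output for pause-362686) can be read back per port with the standard docker CLI:

    docker port pause-362686 22
    # 127.0.0.1:33378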
	I1206 11:22:10.079450  660500 out.go:252] * Updating the running docker "pause-362686" container ...
	I1206 11:22:10.079493  660500 machine.go:94] provisionDockerMachine start ...
	I1206 11:22:10.079578  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.100860  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:10.107554  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:10.107589  660500 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:22:10.299880  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-362686
	
	I1206 11:22:10.299959  660500 ubuntu.go:182] provisioning hostname "pause-362686"
	I1206 11:22:10.300051  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.324940  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:10.325364  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:10.325377  660500 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-362686 && echo "pause-362686" | sudo tee /etc/hostname
	I1206 11:22:10.565160  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-362686
	
	I1206 11:22:10.565286  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.587773  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:10.588085  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:10.588109  660500 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-362686' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-362686/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-362686' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:22:10.776538  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:22:10.776567  660500 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 11:22:10.776589  660500 ubuntu.go:190] setting up certificates
	I1206 11:22:10.776597  660500 provision.go:84] configureAuth start
	I1206 11:22:10.776657  660500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-362686
	I1206 11:22:10.807285  660500 provision.go:143] copyHostCerts
	I1206 11:22:10.807354  660500 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 11:22:10.807374  660500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:22:10.807452  660500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 11:22:10.807561  660500 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 11:22:10.807571  660500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:22:10.807599  660500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 11:22:10.807655  660500 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 11:22:10.807670  660500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:22:10.807696  660500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 11:22:10.807754  660500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.pause-362686 san=[127.0.0.1 192.168.76.2 localhost minikube pause-362686]
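"generating server cert ... san=[...]" means minting a server certificate whose subject alternative names cover every address the node may be reached at. A rough equivalent with crypto/x509, self-signed here for brevity (minikube actually signs with the CA key listed above); the org, SANs, and lifetime are taken from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-362686"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: san=[127.0.0.1 192.168.76.2 localhost minikube pause-362686]
			DNSNames:    []string{"localhost", "minikube", "pause-362686"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}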
	I1206 11:22:10.966620  660500 provision.go:177] copyRemoteCerts
	I1206 11:22:10.966740  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:22:10.966815  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.986201  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:11.093147  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:22:11.117156  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1206 11:22:11.142785  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:22:11.166858  660500 provision.go:87] duration metric: took 390.23099ms to configureAuth
	I1206 11:22:11.166961  660500 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:22:11.167370  660500 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:11.167637  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:11.196859  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:11.197206  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:11.197236  660500 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:22:16.573645  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:22:16.573669  660500 machine.go:97] duration metric: took 6.494167045s to provisionDockerMachine
	I1206 11:22:16.573702  660500 start.go:293] postStartSetup for "pause-362686" (driver="docker")
	I1206 11:22:16.573716  660500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:22:16.573788  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:22:16.573847  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.592871  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:16.698887  660500 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:22:16.702470  660500 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:22:16.702498  660500 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:22:16.702510  660500 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 11:22:16.702562  660500 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 11:22:16.702648  660500 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 11:22:16.702760  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:22:16.710147  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:16.727241  660500 start.go:296] duration metric: took 153.51936ms for postStartSetup
	I1206 11:22:16.727322  660500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:22:16.727360  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.745204  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:16.848945  660500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:22:16.856109  660500 fix.go:56] duration metric: took 6.808080579s for fixHost
	I1206 11:22:16.856136  660500 start.go:83] releasing machines lock for "pause-362686", held for 6.808133289s
	I1206 11:22:16.856215  660500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-362686
	I1206 11:22:16.878911  660500 ssh_runner.go:195] Run: cat /version.json
	I1206 11:22:16.878968  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.878914  660500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:22:16.879282  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.904612  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:16.909037  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:17.097488  660500 ssh_runner.go:195] Run: systemctl --version
	I1206 11:22:17.104036  660500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:22:17.144180  660500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:22:17.148801  660500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:22:17.148894  660500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:22:17.156807  660500 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:22:17.156881  660500 start.go:496] detecting cgroup driver to use...
	I1206 11:22:17.156940  660500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:22:17.157000  660500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:22:17.172168  660500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:22:17.185619  660500 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:22:17.185752  660500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:22:17.201826  660500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:22:17.215702  660500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:22:17.357942  660500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:22:17.507061  660500 docker.go:234] disabling docker service ...
	I1206 11:22:17.507215  660500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:22:17.523514  660500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:22:17.537233  660500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:22:17.669832  660500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:22:17.808704  660500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:22:17.822618  660500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:22:17.837042  660500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:22:17.837124  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.846457  660500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:22:17.846527  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.855651  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.865265  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.874983  660500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:22:17.883356  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.893206  660500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.902898  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.912908  660500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:22:17.920731  660500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:22:17.928225  660500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:18.077961  660500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 11:22:18.333431  660500 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:22:18.333497  660500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:22:18.337479  660500 start.go:564] Will wait 60s for crictl version
	I1206 11:22:18.337539  660500 ssh_runner.go:195] Run: which crictl
	I1206 11:22:18.342311  660500 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:22:18.376401  660500 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:22:18.376492  660500 ssh_runner.go:195] Run: crio --version
	I1206 11:22:18.411871  660500 ssh_runner.go:195] Run: crio --version
	I1206 11:22:18.468068  660500 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
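The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set the cgroup manager, re-add conmon_cgroup, and seed default_sysctls with the unprivileged-port override. After it runs, the drop-in ends up roughly like this; an illustrative fragment only, with the [crio.runtime]/[crio.image] section placement assumed from stock CRI-O defaults rather than taken from the log:

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"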
	I1206 11:22:14.546711  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Running}}
	I1206 11:22:14.569427  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Status}}
	I1206 11:22:14.594607  660445 cli_runner.go:164] Run: docker exec force-systemd-env-163342 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:22:14.646152  660445 oci.go:144] the created container "force-systemd-env-163342" has a running status.
	I1206 11:22:14.646180  660445 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa...
	I1206 11:22:14.842448  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 11:22:14.842507  660445 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:22:14.872736  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Status}}
	I1206 11:22:14.905004  660445 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:22:14.905029  660445 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-163342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:22:14.962727  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Status}}
	I1206 11:22:14.987384  660445 machine.go:94] provisionDockerMachine start ...
	I1206 11:22:14.987489  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:15.014944  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:15.015426  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:15.015440  660445 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:22:15.016294  660445 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:22:18.174973  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-163342
	
	I1206 11:22:18.175041  660445 ubuntu.go:182] provisioning hostname "force-systemd-env-163342"
	I1206 11:22:18.175154  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:18.199344  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:18.199742  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:18.199761  660445 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-163342 && echo "force-systemd-env-163342" | sudo tee /etc/hostname
	I1206 11:22:18.365153  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-163342
	
	I1206 11:22:18.365279  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:18.384736  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:18.385049  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:18.385071  660445 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-163342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-163342/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-163342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:22:18.547852  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:22:18.547875  660445 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 11:22:18.547896  660445 ubuntu.go:190] setting up certificates
	I1206 11:22:18.547905  660445 provision.go:84] configureAuth start
	I1206 11:22:18.547971  660445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-163342
	I1206 11:22:18.577171  660445 provision.go:143] copyHostCerts
	I1206 11:22:18.577225  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:22:18.577259  660445 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 11:22:18.577266  660445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:22:18.577348  660445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 11:22:18.577435  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:22:18.577453  660445 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 11:22:18.577458  660445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:22:18.577483  660445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 11:22:18.577530  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:22:18.577551  660445 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 11:22:18.577555  660445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:22:18.577578  660445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 11:22:18.577632  660445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-163342 san=[127.0.0.1 192.168.85.2 force-systemd-env-163342 localhost minikube]
	I1206 11:22:18.911768  660445 provision.go:177] copyRemoteCerts
	I1206 11:22:18.911876  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:22:18.911934  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:18.938444  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.048874  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 11:22:19.048932  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:22:19.071115  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 11:22:19.071242  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1206 11:22:19.103494  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 11:22:19.103550  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:22:19.125523  660445 provision.go:87] duration metric: took 577.596471ms to configureAuth
	I1206 11:22:19.125567  660445 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:22:19.125748  660445 config.go:182] Loaded profile config "force-systemd-env-163342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:19.125861  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.145717  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:19.146035  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:19.146053  660445 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:22:18.471186  660500 cli_runner.go:164] Run: docker network inspect pause-362686 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:22:18.488382  660500 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:22:18.492676  660500 kubeadm.go:884] updating cluster {Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:22:18.492815  660500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:18.492865  660500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:18.531606  660500 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:18.531630  660500 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:22:18.531695  660500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:18.570333  660500 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:18.570362  660500 cache_images.go:86] Images are preloaded, skipping loading
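`sudo crictl images --output json` is how minikube verifies the preloaded image set before deciding whether to extract a tarball. The JSON carries an images array whose entries include repo tags; a small sketch that lists them, assuming that shape (field names per crictl's JSON output, not shown in this log) and crictl on PATH:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the (assumed) shape of `crictl images --output json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}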
	I1206 11:22:18.570371  660500 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1206 11:22:18.570470  660500 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-362686 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:22:18.570563  660500 ssh_runner.go:195] Run: crio config
	I1206 11:22:18.659080  660500 cni.go:84] Creating CNI manager for ""
	I1206 11:22:18.659178  660500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:18.659243  660500 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:22:18.659290  660500 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-362686 NodeName:pause-362686 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:22:18.659439  660500 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-362686"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:22:18.659557  660500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:22:18.667898  660500 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:22:18.668021  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:22:18.675827  660500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1206 11:22:18.693615  660500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:22:18.707963  660500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1206 11:22:18.724726  660500 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:22:18.728923  660500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:18.892175  660500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:22:18.907441  660500 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686 for IP: 192.168.76.2
	I1206 11:22:18.907459  660500 certs.go:195] generating shared ca certs ...
	I1206 11:22:18.907479  660500 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:18.907605  660500 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 11:22:18.907647  660500 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 11:22:18.907655  660500 certs.go:257] generating profile certs ...
	I1206 11:22:18.907737  660500 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key
	I1206 11:22:18.907802  660500 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/apiserver.key.d90920fb
	I1206 11:22:18.907847  660500 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/proxy-client.key
	I1206 11:22:18.907954  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 11:22:18.907983  660500 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 11:22:18.907991  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 11:22:18.908021  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:22:18.908050  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:22:18.908077  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 11:22:18.908124  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:18.908701  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:22:18.928076  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:22:18.958817  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:22:18.981756  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 11:22:19.004320  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 11:22:19.025824  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:22:19.048382  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:22:19.070810  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:22:19.095908  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:22:19.119510  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 11:22:19.146661  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 11:22:19.170674  660500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:22:19.187209  660500 ssh_runner.go:195] Run: openssl version
	I1206 11:22:19.193663  660500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.205616  660500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:22:19.224961  660500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.233376  660500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.233457  660500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.278227  660500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:22:19.286356  660500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.294282  660500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 11:22:19.302083  660500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.305945  660500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.306018  660500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.361843  660500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:22:19.369362  660500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.380225  660500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 11:22:19.388977  660500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.394105  660500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.394180  660500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.457994  660500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:22:19.480476  660500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:22:19.489683  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:22:19.586688  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:22:19.739401  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:22:19.863483  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:22:19.984657  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:22:20.069889  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
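Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid, non-zero means it is about to expire, which is what triggers cert regeneration. The same check from Go, assuming only that openssl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithinADay reports whether the certificate at path expires in
	// the next 86400 seconds, mirroring the openssl calls in the log above.
	func expiresWithinADay(path string) bool {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
		return cmd.Run() != nil // non-zero exit status => will expire soon
	}

	func main() {
		fmt.Println(expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
	}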
	I1206 11:22:20.139959  660500 kubeadm.go:401] StartCluster: {Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:20.140094  660500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:22:20.140155  660500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:22:20.186882  660500 cri.go:89] found id: "86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda"
	I1206 11:22:20.186910  660500 cri.go:89] found id: "ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63"
	I1206 11:22:20.186915  660500 cri.go:89] found id: "6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f"
	I1206 11:22:20.186919  660500 cri.go:89] found id: "a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135"
	I1206 11:22:20.186922  660500 cri.go:89] found id: "ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323"
	I1206 11:22:20.186926  660500 cri.go:89] found id: "c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142"
	I1206 11:22:20.186929  660500 cri.go:89] found id: "0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356"
	I1206 11:22:20.186932  660500 cri.go:89] found id: "6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	I1206 11:22:20.186936  660500 cri.go:89] found id: "941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35"
	I1206 11:22:20.186943  660500 cri.go:89] found id: "a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0"
	I1206 11:22:20.186946  660500 cri.go:89] found id: "edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815"
	I1206 11:22:20.186949  660500 cri.go:89] found id: "a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4"
	I1206 11:22:20.186953  660500 cri.go:89] found id: "e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948"
	I1206 11:22:20.186956  660500 cri.go:89] found id: "a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24"
	I1206 11:22:20.186959  660500 cri.go:89] found id: ""
	I1206 11:22:20.187006  660500 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 11:22:20.207676  660500 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:22:20Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:22:20.207761  660500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:22:20.240156  660500 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:22:20.240180  660500 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:22:20.240245  660500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:22:20.263634  660500 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:22:20.264275  660500 kubeconfig.go:125] found "pause-362686" server: "https://192.168.76.2:8443"
	I1206 11:22:20.264833  660500 kapi.go:59] client config for pause-362686: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
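The rest.Config dump above is minikube's own API client, pointed at https://192.168.76.2:8443 and authenticated with the profile's client certificate. An equivalent client can be built from the kubeconfig minikube writes, using k8s.io/client-go (path taken from this log; error handling trimmed):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-484819/kubeconfig")
		clientset, _ := kubernetes.NewForConfig(cfg)
		nodes, _ := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		for _, n := range nodes.Items {
			fmt.Println(n.Name) // pause-362686
		}
	}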
	I1206 11:22:20.265344  660500 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:22:20.265365  660500 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:22:20.265371  660500 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:22:20.265381  660500 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:22:20.265392  660500 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:22:20.265637  660500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:22:20.289509  660500 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 11:22:20.289546  660500 kubeadm.go:602] duration metric: took 49.358622ms to restartPrimaryControlPlane
	I1206 11:22:20.289556  660500 kubeadm.go:403] duration metric: took 149.607989ms to StartCluster
	I1206 11:22:20.289580  660500 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:20.289640  660500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:22:20.290277  660500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:20.290497  660500 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 11:22:20.290831  660500 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:20.290878  660500 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:22:20.295656  660500 out.go:179] * Verifying Kubernetes components...
	I1206 11:22:20.295758  660500 out.go:179] * Enabled addons: 
	I1206 11:22:19.499535  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:22:19.499611  660445 machine.go:97] duration metric: took 4.512202287s to provisionDockerMachine
	I1206 11:22:19.499637  660445 client.go:176] duration metric: took 9.70526029s to LocalClient.Create
	I1206 11:22:19.499685  660445 start.go:167] duration metric: took 9.705347205s to libmachine.API.Create "force-systemd-env-163342"
	I1206 11:22:19.499716  660445 start.go:293] postStartSetup for "force-systemd-env-163342" (driver="docker")
	I1206 11:22:19.499742  660445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:22:19.499841  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:22:19.499904  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.532950  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.652591  660445 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:22:19.659573  660445 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:22:19.659603  660445 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:22:19.659614  660445 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 11:22:19.659665  660445 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 11:22:19.659747  660445 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 11:22:19.659754  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 11:22:19.659849  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:22:19.673751  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:19.706639  660445 start.go:296] duration metric: took 206.894773ms for postStartSetup
	I1206 11:22:19.707053  660445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-163342
	I1206 11:22:19.726131  660445 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json ...
	I1206 11:22:19.726399  660445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:22:19.726446  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.760055  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.882898  660445 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:22:19.890659  660445 start.go:128] duration metric: took 10.103451479s to createHost
	I1206 11:22:19.890682  660445 start.go:83] releasing machines lock for "force-systemd-env-163342", held for 10.103619722s
	I1206 11:22:19.890752  660445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-163342
	I1206 11:22:19.914827  660445 ssh_runner.go:195] Run: cat /version.json
	I1206 11:22:19.914878  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.915105  660445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:22:19.915198  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.941652  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.955816  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:20.070453  660445 ssh_runner.go:195] Run: systemctl --version
	I1206 11:22:20.193429  660445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:22:20.272374  660445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:22:20.281474  660445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:22:20.281558  660445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:22:20.324647  660445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:22:20.324671  660445 start.go:496] detecting cgroup driver to use...
	I1206 11:22:20.324688  660445 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1206 11:22:20.324750  660445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:22:20.349754  660445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:22:20.373399  660445 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:22:20.373487  660445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:22:20.402915  660445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:22:20.431081  660445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:22:20.635542  660445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:22:20.837100  660445 docker.go:234] disabling docker service ...
	I1206 11:22:20.837191  660445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:22:20.879171  660445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:22:20.895730  660445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:22:21.108272  660445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:22:21.303698  660445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:22:21.318362  660445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:22:21.337490  660445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:22:21.337599  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.348622  660445 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 11:22:21.348743  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.357721  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.366308  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.375624  660445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:22:21.383838  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.400537  660445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.421498  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
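The run of sed commands above (11:22:21.337 through 11:22:21.421) rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to "systemd", re-adds conmon_cgroup = "pod", and prepends net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A minimal Go sketch of the first two substitutions, assuming the same line-anchored patterns the sed expressions use (the real edits run as sed over SSH, exactly as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; the real file is
    	// edited in place over SSH, as the log shows. These pre-edit values
    	// are placeholders, not captured from the node.
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"`

    	// Same line-anchored substitutions as the two sed expressions above.
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	fmt.Println(conf)
    }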
	I1206 11:22:21.436294  660445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:22:21.444817  660445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:22:21.455292  660445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:21.627946  660445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 11:22:21.875422  660445 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:22:21.875538  660445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:22:21.886447  660445 start.go:564] Will wait 60s for crictl version
	I1206 11:22:21.886555  660445 ssh_runner.go:195] Run: which crictl
	I1206 11:22:21.890573  660445 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:22:21.937391  660445 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
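start.go:543 and start.go:564 above each declare a 60s wait, first for the CRI socket to appear and then for crictl to report a version. A minimal sketch of such a stat-poll wait, assuming a plain 500ms retry loop (the actual retry cadence is not shown in this log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout elapses, mirroring
    // the "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is up")
    }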
	I1206 11:22:21.937514  660445 ssh_runner.go:195] Run: crio --version
	I1206 11:22:21.997843  660445 ssh_runner.go:195] Run: crio --version
	I1206 11:22:22.061371  660445 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 11:22:22.064277  660445 cli_runner.go:164] Run: docker network inspect force-systemd-env-163342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:22:22.095214  660445 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:22:22.099450  660445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:22:22.112933  660445 kubeadm.go:884] updating cluster {Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:22:22.113051  660445 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:22.113104  660445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:22.167917  660445 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:22.167938  660445 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:22:22.167990  660445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:22.232097  660445 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:22.232117  660445 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:22:22.232125  660445 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1206 11:22:22.232211  660445 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-163342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:22:22.232295  660445 ssh_runner.go:195] Run: crio config
	I1206 11:22:22.383598  660445 cni.go:84] Creating CNI manager for ""
	I1206 11:22:22.383630  660445 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:22.383649  660445 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:22:22.383672  660445 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-163342 NodeName:force-systemd-env-163342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:22:22.383811  660445 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-163342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:22:22.383902  660445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:22:22.395708  660445 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:22:22.395797  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:22:22.411290  660445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 11:22:22.425508  660445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:22:22.449013  660445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 11:22:22.465926  660445 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:22:22.471692  660445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:22:22.485406  660445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:22.673170  660445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:22:22.712553  660445 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342 for IP: 192.168.85.2
	I1206 11:22:22.712578  660445 certs.go:195] generating shared ca certs ...
	I1206 11:22:22.712595  660445 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:22.712731  660445 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 11:22:22.712783  660445 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 11:22:22.712795  660445 certs.go:257] generating profile certs ...
	I1206 11:22:22.712852  660445 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.key
	I1206 11:22:22.712867  660445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.crt with IP's: []
	I1206 11:22:23.093894  660445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.crt ...
	I1206 11:22:23.093927  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.crt: {Name:mk29a360def36d00768aec66005155444e965c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.094154  660445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.key ...
	I1206 11:22:23.094171  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.key: {Name:mk0b30dea4883e3fbb6cbaafc38f130913ccb3e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.094283  660445 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a
	I1206 11:22:23.094303  660445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:22:23.325649  660445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a ...
	I1206 11:22:23.325682  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a: {Name:mk1d8bbd0aae516f33c82bce7ee6acd6c7994a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.325860  660445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a ...
	I1206 11:22:23.325877  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a: {Name:mk05b66e8e7ce836728e0b073b00b3d6952c0eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.325950  660445 certs.go:382] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt
	I1206 11:22:23.326039  660445 certs.go:386] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key
	I1206 11:22:23.326102  660445 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key
	I1206 11:22:23.326121  660445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt with IP's: []
	I1206 11:22:23.856458  660445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt ...
	I1206 11:22:23.856491  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt: {Name:mkd1b93de2bb0bb06daa15078839fabd204ce7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.856671  660445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key ...
	I1206 11:22:23.856689  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key: {Name:mkc73b28d41c5777b52eeb107c26e8756349b4f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
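The certs.go/crypto.go lines above generate the per-profile client, apiserver, and aggregator proxy-client certificates, each signed by a shared CA; the apiserver cert carries the IP SANs listed at 11:22:23.094303. A minimal crypto/x509 sketch of that flow, with the caveats that the CA here is generated on the spot (minikube loads its existing ca.key/ca.crt instead), error handling is elided, and the CN values, key size, and validity periods are placeholders:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert signed for the IP SANs shown in the apiserver.crt log line:
    	// [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2].
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }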
	I1206 11:22:23.856763  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 11:22:23.856791  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 11:22:23.856804  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 11:22:23.856822  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 11:22:23.856834  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 11:22:23.856867  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 11:22:23.856884  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 11:22:23.856900  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 11:22:23.856953  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 11:22:23.857003  660445 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 11:22:23.857016  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 11:22:23.857044  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:22:23.857073  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:22:23.857100  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 11:22:23.857159  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:23.857196  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 11:22:23.857216  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 11:22:23.857239  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:23.857825  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:22:23.893745  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:22:23.922834  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:22:23.959461  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 11:22:23.988664  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 11:22:24.020397  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:22:24.039805  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:22:24.059293  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:22:24.078830  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 11:22:24.110992  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 11:22:24.144308  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:22:24.170440  660445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:22:24.188494  660445 ssh_runner.go:195] Run: openssl version
	I1206 11:22:24.197872  660445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.219678  660445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:22:24.236399  660445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.248503  660445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.248598  660445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.295267  660445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:22:24.308291  660445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:22:24.321215  660445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.333009  660445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 11:22:24.348461  660445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.356627  660445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.356745  660445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.415167  660445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:22:24.423281  660445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/488068.pem /etc/ssl/certs/51391683.0
	I1206 11:22:20.299918  660500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:20.300045  660500 addons.go:530] duration metric: took 9.164657ms for enable addons: enabled=[]
	I1206 11:22:20.682860  660500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:22:20.711302  660500 node_ready.go:35] waiting up to 6m0s for node "pause-362686" to be "Ready" ...
	I1206 11:22:24.454078  660445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.469316  660445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 11:22:24.487330  660445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.497828  660445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.497974  660445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.565210  660445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:22:24.573291  660445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4880682.pem /etc/ssl/certs/3ec20f2e.0
	I1206 11:22:24.581304  660445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:22:24.587568  660445 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:22:24.587668  660445 kubeadm.go:401] StartCluster: {Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:24.587774  660445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:22:24.587877  660445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:22:24.656951  660445 cri.go:89] found id: ""
	I1206 11:22:24.657059  660445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:22:24.666861  660445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:22:24.676371  660445 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:22:24.676483  660445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:22:24.690887  660445 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:22:24.690952  660445 kubeadm.go:158] found existing configuration files:
	
	I1206 11:22:24.691036  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:22:24.699431  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:22:24.699542  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:22:24.709521  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:22:24.718323  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:22:24.718460  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:22:24.728401  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:22:24.737046  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:22:24.737186  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:22:24.753393  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:22:24.763426  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:22:24.763541  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:22:24.770905  660445 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:22:24.852559  660445 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 11:22:24.852779  660445 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:22:24.902436  660445 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:22:24.902612  660445 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:22:24.902675  660445 kubeadm.go:319] OS: Linux
	I1206 11:22:24.902757  660445 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:22:24.902866  660445 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:22:24.902934  660445 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:22:24.903000  660445 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:22:24.903071  660445 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:22:24.903167  660445 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:22:24.903247  660445 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:22:24.903328  660445 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:22:24.903404  660445 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:22:25.028568  660445 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:22:25.028742  660445 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:22:25.028868  660445 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:22:25.043505  660445 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:22:25.048939  660445 out.go:252]   - Generating certificates and keys ...
	I1206 11:22:25.049116  660445 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:22:25.049210  660445 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:22:25.110873  660445 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:22:25.796129  660445 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:22:26.166878  660445 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:22:26.909344  660445 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:22:27.167453  660445 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:22:27.167600  660445 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-163342 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:22:27.366128  660445 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:22:27.366497  660445 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-163342 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:22:27.821243  660445 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:22:28.378804  660445 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:22:28.570920  660445 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:22:28.571187  660445 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:22:29.251624  660445 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:22:26.732511  660500 node_ready.go:49] node "pause-362686" is "Ready"
	I1206 11:22:26.732536  660500 node_ready.go:38] duration metric: took 6.021205356s for node "pause-362686" to be "Ready" ...
	I1206 11:22:26.732548  660500 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:22:26.732608  660500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:26.781476  660500 api_server.go:72] duration metric: took 6.490942169s to wait for apiserver process to appear ...
	I1206 11:22:26.781498  660500 api_server.go:88] waiting for apiserver healthz status ...
	I1206 11:22:26.781518  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:26.803705  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 11:22:26.803787  660500 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 11:22:27.282430  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:27.305615  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:22:27.305703  660500 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:22:27.782284  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:27.799006  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:22:27.799104  660500 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:22:28.281599  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:28.291627  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 11:22:28.292958  660500 api_server.go:141] control plane version: v1.34.2
	I1206 11:22:28.292980  660500 api_server.go:131] duration metric: took 1.511474397s to wait for apiserver health ...
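The healthz progression above is the normal apiserver warm-up: first a 403 because the anonymous probe is not yet authorized, then 500s while the rbac/bootstrap-roles post-start hook is still pending, then 200 once bootstrap finishes. A minimal polling sketch of the same check, assuming an anonymous client and skipping verification of the self-signed serving certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log; TLS verification is skipped here for
    	// brevity (minikube's real check can trust the cluster CA instead).
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }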
	I1206 11:22:28.292989  660500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 11:22:28.297925  660500 system_pods.go:59] 7 kube-system pods found
	I1206 11:22:28.297971  660500 system_pods.go:61] "coredns-66bc5c9577-fpnqh" [cee22ce0-0d6a-4f3d-8f27-76f52d094dcb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:22:28.297978  660500 system_pods.go:61] "etcd-pause-362686" [8995ba07-f227-41f3-bb7b-dd67ce35d0ee] Running
	I1206 11:22:28.297984  660500 system_pods.go:61] "kindnet-2xclh" [3cf5b95f-134e-4269-8c81-3a38b6f2a52d] Running
	I1206 11:22:28.297989  660500 system_pods.go:61] "kube-apiserver-pause-362686" [b2ca5685-08cf-4711-bd88-1184aa55260c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:22:28.297995  660500 system_pods.go:61] "kube-controller-manager-pause-362686" [ddcd8a5e-c280-40ad-9e1b-875496725a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:22:28.298000  660500 system_pods.go:61] "kube-proxy-gjknk" [a4a94694-360a-43df-8d83-3139b6279e4c] Running
	I1206 11:22:28.298004  660500 system_pods.go:61] "kube-scheduler-pause-362686" [6fa2f3a6-e9b4-4716-b97f-2b4583a0e219] Running
	I1206 11:22:28.298009  660500 system_pods.go:74] duration metric: took 5.01501ms to wait for pod list to return data ...
	I1206 11:22:28.298021  660500 default_sa.go:34] waiting for default service account to be created ...
	I1206 11:22:28.300983  660500 default_sa.go:45] found service account: "default"
	I1206 11:22:28.301054  660500 default_sa.go:55] duration metric: took 3.025575ms for default service account to be created ...
	I1206 11:22:28.301080  660500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 11:22:28.304711  660500 system_pods.go:86] 7 kube-system pods found
	I1206 11:22:28.304791  660500 system_pods.go:89] "coredns-66bc5c9577-fpnqh" [cee22ce0-0d6a-4f3d-8f27-76f52d094dcb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:22:28.304815  660500 system_pods.go:89] "etcd-pause-362686" [8995ba07-f227-41f3-bb7b-dd67ce35d0ee] Running
	I1206 11:22:28.304834  660500 system_pods.go:89] "kindnet-2xclh" [3cf5b95f-134e-4269-8c81-3a38b6f2a52d] Running
	I1206 11:22:28.304870  660500 system_pods.go:89] "kube-apiserver-pause-362686" [b2ca5685-08cf-4711-bd88-1184aa55260c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:22:28.304897  660500 system_pods.go:89] "kube-controller-manager-pause-362686" [ddcd8a5e-c280-40ad-9e1b-875496725a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:22:28.304916  660500 system_pods.go:89] "kube-proxy-gjknk" [a4a94694-360a-43df-8d83-3139b6279e4c] Running
	I1206 11:22:28.304951  660500 system_pods.go:89] "kube-scheduler-pause-362686" [6fa2f3a6-e9b4-4716-b97f-2b4583a0e219] Running
	I1206 11:22:28.304976  660500 system_pods.go:126] duration metric: took 3.87579ms to wait for k8s-apps to be running ...
	I1206 11:22:28.305000  660500 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 11:22:28.305087  660500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:22:28.319554  660500 system_svc.go:56] duration metric: took 14.544115ms WaitForService to wait for kubelet
	I1206 11:22:28.319633  660500 kubeadm.go:587] duration metric: took 8.029103826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:22:28.319668  660500 node_conditions.go:102] verifying NodePressure condition ...
	I1206 11:22:28.329651  660500 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 11:22:28.329732  660500 node_conditions.go:123] node cpu capacity is 2
	I1206 11:22:28.329760  660500 node_conditions.go:105] duration metric: took 10.073981ms to run NodePressure ...
	I1206 11:22:28.329786  660500 start.go:242] waiting for startup goroutines ...
	I1206 11:22:28.329827  660500 start.go:247] waiting for cluster config update ...
	I1206 11:22:28.329854  660500 start.go:256] writing updated cluster config ...
	I1206 11:22:28.330237  660500 ssh_runner.go:195] Run: rm -f paused
	I1206 11:22:28.333945  660500 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:22:28.334583  660500 kapi.go:59] client config for pause-362686: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:22:28.338495  660500 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fpnqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:29.826372  660445 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:22:30.797906  660445 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:22:31.391559  660445 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:22:31.896678  660445 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:22:31.897697  660445 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:22:31.900243  660445 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:22:31.903660  660445 out.go:252]   - Booting up control plane ...
	I1206 11:22:31.903763  660445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:22:31.903838  660445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:22:31.903901  660445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:22:31.926484  660445 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:22:31.926610  660445 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:22:31.933637  660445 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:22:31.934020  660445 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:22:31.934084  660445 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:22:32.064390  660445 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:22:32.064512  660445 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1206 11:22:30.351924  660500 pod_ready.go:104] pod "coredns-66bc5c9577-fpnqh" is not "Ready", error: <nil>
	I1206 11:22:31.848988  660500 pod_ready.go:94] pod "coredns-66bc5c9577-fpnqh" is "Ready"
	I1206 11:22:31.849014  660500 pod_ready.go:86] duration metric: took 3.510452608s for pod "coredns-66bc5c9577-fpnqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:31.855011  660500 pod_ready.go:83] waiting for pod "etcd-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:31.862771  660500 pod_ready.go:94] pod "etcd-pause-362686" is "Ready"
	I1206 11:22:31.862804  660500 pod_ready.go:86] duration metric: took 7.763724ms for pod "etcd-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:31.867527  660500 pod_ready.go:83] waiting for pod "kube-apiserver-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 11:22:33.874826  660500 pod_ready.go:104] pod "kube-apiserver-pause-362686" is not "Ready", error: <nil>
	W1206 11:22:35.874868  660500 pod_ready.go:104] pod "kube-apiserver-pause-362686" is not "Ready", error: <nil>
	I1206 11:22:37.372167  660500 pod_ready.go:94] pod "kube-apiserver-pause-362686" is "Ready"
	I1206 11:22:37.372197  660500 pod_ready.go:86] duration metric: took 5.504563226s for pod "kube-apiserver-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.374239  660500 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.381251  660500 pod_ready.go:94] pod "kube-controller-manager-pause-362686" is "Ready"
	I1206 11:22:37.381280  660500 pod_ready.go:86] duration metric: took 7.012239ms for pod "kube-controller-manager-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.383756  660500 pod_ready.go:83] waiting for pod "kube-proxy-gjknk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.388591  660500 pod_ready.go:94] pod "kube-proxy-gjknk" is "Ready"
	I1206 11:22:37.388614  660500 pod_ready.go:86] duration metric: took 4.837364ms for pod "kube-proxy-gjknk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.392895  660500 pod_ready.go:83] waiting for pod "kube-scheduler-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.644944  660500 pod_ready.go:94] pod "kube-scheduler-pause-362686" is "Ready"
	I1206 11:22:37.645027  660500 pod_ready.go:86] duration metric: took 252.101935ms for pod "kube-scheduler-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.645056  660500 pod_ready.go:40] duration metric: took 9.311037527s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:22:37.737023  660500 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 11:22:37.747188  660500 out.go:179] * Done! kubectl is now configured to use "pause-362686" cluster and "default" namespace by default
	I1206 11:22:36.083811  660445 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 4.01970701s
	I1206 11:22:36.087386  660445 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 11:22:36.087481  660445 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 11:22:36.087566  660445 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 11:22:36.087641  660445 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
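
(The pod_ready.go lines above poll each kube-system pod until its PodReady condition is True, or the pod is gone. What follows is a minimal sketch of the same readiness poll with client-go; it is illustrative, not minikube's actual pod_ready implementation. The pod name is taken from the log, while the kubeconfig path and the two-minute timeout are assumptions.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll once per second until the pod reports Ready (the log's loop also
	// treats a deleted pod as success; that branch is omitted here).
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-fpnqh", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("pod ready:", err == nil)
}
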
	
	
	==> CRI-O <==
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.772605205Z" level=info msg="Started container" PID=2370 containerID=6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f description=kube-system/kube-apiserver-pause-362686/kube-apiserver id=1150ae8c-eff0-4cea-a60d-1812cfa79bec name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ad607eb97380c593fdfc0133145d95eee6ed325f3145d0eed00fa812ac242fd
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.780961001Z" level=info msg="Created container ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63: kube-system/coredns-66bc5c9577-fpnqh/coredns" id=a2ff1d3c-770d-4694-af34-0e6a6410ca7d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.787593924Z" level=info msg="Starting container: ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63" id=99186525-0cbb-4c58-a443-5b27e1986a3f name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.792159154Z" level=info msg="Started container" PID=2367 containerID=ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63 description=kube-system/coredns-66bc5c9577-fpnqh/coredns id=99186525-0cbb-4c58-a443-5b27e1986a3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d19fdcd8a5d0b367366a49d53d585c0412149bda945344c61f222ef59d977f4
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.828591049Z" level=info msg="Created container 86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda: kube-system/kindnet-2xclh/kindnet-cni" id=f214e09c-d4c5-489e-8ddc-4da5855d85da name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.832884793Z" level=info msg="Starting container: 86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda" id=ec22a561-1bf6-43f3-ac85-6217d273f8d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.834907334Z" level=info msg="Started container" PID=2383 containerID=86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda description=kube-system/kindnet-2xclh/kindnet-cni id=ec22a561-1bf6-43f3-ac85-6217d273f8d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9a604b6ff7ed248fa04303626569e2a9f043ab091d8700171dbdece4fd47417
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.873604794Z" level=info msg="Created container a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135: kube-system/kube-proxy-gjknk/kube-proxy" id=3dbf8187-1609-4cfd-aacc-dbd06196743f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.874305704Z" level=info msg="Starting container: a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135" id=aabe82c1-4c76-49de-a6ac-02dbf03917d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.888194752Z" level=info msg="Started container" PID=2347 containerID=a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135 description=kube-system/kube-proxy-gjknk/kube-proxy id=aabe82c1-4c76-49de-a6ac-02dbf03917d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b7890de5806e5588f9195610c019f283a188e4ba50b5c6aa593845f09c1dc62
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.294142969Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.301867466Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.301905249Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.301931095Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.305230273Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.315154095Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.315266552Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.320749286Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.32091218Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.320981216Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.328840848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.329015212Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.329098017Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.340386785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.340428598Z" level=info msg="Updated default CNI network name to kindnet"
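
(The CREATE, WRITE, and RENAME events CRI-O reports above are the usual atomic-update pattern for CNI configuration: the writer fills 10-kindnet.conflist.temp and then renames it over the final name, so a watcher never reads a half-written file. A sketch of that pattern follows; it is illustrative rather than kindnet's actual code, and the conflist payload is an assumption beyond the ptp plugin type the log confirms. Writing under /etc/cni/net.d requires root.)

package main

import (
	"os"
	"path/filepath"
)

// writeConflistAtomically stages the config in a ".temp" file and renames it
// into place. rename(2) is atomic within a filesystem, which is why CRI-O's
// inotify watcher sees the temp-file events followed by a single RENAME.
func writeConflistAtomically(dir, name string, data []byte) error {
	tmp := filepath.Join(dir, name+".temp") // e.g. 10-kindnet.conflist.temp
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	// Hypothetical payload; the log only confirms the "ptp" plugin type.
	conflist := []byte(`{"cniVersion":"0.4.0","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	if err := writeConflistAtomically("/etc/cni/net.d", "10-kindnet.conflist", conflist); err != nil {
		panic(err)
	}
}
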
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	86668cc48342b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   b9a604b6ff7ed       kindnet-2xclh                          kube-system
	ebe67943652dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   6d19fdcd8a5d0       coredns-66bc5c9577-fpnqh               kube-system
	6e3cfbbf22515       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   3ad607eb97380       kube-apiserver-pause-362686            kube-system
	a26bba6a17e63       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   6b7890de5806e       kube-proxy-gjknk                       kube-system
	ff260a67303ff       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   74638ed0b2609       kube-controller-manager-pause-362686   kube-system
	c453d81cb3c61       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   2053bfa4e8164       etcd-pause-362686                      kube-system
	0fd9511997553       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   22 seconds ago       Running             kube-scheduler            1                   e029bde32796c       kube-scheduler-pause-362686            kube-system
	6d1b063b72f99       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   6d19fdcd8a5d0       coredns-66bc5c9577-fpnqh               kube-system
	941d38d4fe915       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   b9a604b6ff7ed       kindnet-2xclh                          kube-system
	a44e62d267f8f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   6b7890de5806e       kube-proxy-gjknk                       kube-system
	edea99de7a794       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   74638ed0b2609       kube-controller-manager-pause-362686   kube-system
	a2bd67f169d22       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   2053bfa4e8164       etcd-pause-362686                      kube-system
	e5dcf878f0a2f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   e029bde32796c       kube-scheduler-pause-362686            kube-system
	a978c34bc129a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   3ad607eb97380       kube-apiserver-pause-362686            kube-system
	
	
	==> coredns [6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36828 - 35279 "HINFO IN 6217268144750739585.1611302712944024103. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006432608s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44023 - 31970 "HINFO IN 1445320698905015370.8901211037546607024. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014312957s
	
	
	==> describe nodes <==
	Name:               pause-362686
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-362686
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=pause-362686
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T11_21_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 11:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-362686
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 11:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:21:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:21:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:21:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-362686
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                eac46063-1e56-43a9-8239-8ddd856377e9
	  Boot ID:                    e36fa5c9-4dd5-4964-a1e1-f5022a7b372f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fpnqh                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-362686                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-2xclh                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-362686             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-362686    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-gjknk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-362686             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  95s (x8 over 96s)  kubelet          Node pause-362686 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s (x8 over 96s)  kubelet          Node pause-362686 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s (x8 over 96s)  kubelet          Node pause-362686 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node pause-362686 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node pause-362686 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s                kubelet          Node pause-362686 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s                node-controller  Node pause-362686 event: Registered Node pause-362686 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-362686 status is now: NodeReady
	  Normal   RegisteredNode           11s                node-controller  Node pause-362686 event: Registered Node pause-362686 in Controller
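
(The condition and event tables above are kubectl describe output; the same node conditions can also be read programmatically. A small client-go sketch follows, illustrative and not part of the report's tooling, assuming the default kubeconfig reaches the cluster.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-362686", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints rows equivalent to the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
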
	
	
	==> dmesg <==
	[  +3.396905] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:59] overlayfs: idmapped layers are currently not supported
	[ +34.069943] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:00] overlayfs: idmapped layers are currently not supported
	[  +3.921462] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:01] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:02] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:03] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:08] overlayfs: idmapped layers are currently not supported
	[ +32.041559] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:11] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:12] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:13] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:14] overlayfs: idmapped layers are currently not supported
	[  +0.520412] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:15] overlayfs: idmapped layers are currently not supported
	[ +26.850323] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:16] overlayfs: idmapped layers are currently not supported
	[ +26.214447] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:19] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:21] overlayfs: idmapped layers are currently not supported
	[  +0.844232] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4] <==
	{"level":"warn","ts":"2025-12-06T11:21:11.314106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.371339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.459942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.524502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.559399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.625395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.843272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T11:22:11.388093Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T11:22:11.388149Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-362686","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-06T11:22:11.388243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T11:22:11.939759Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T11:22:11.939840Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:22:11.939860Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-06T11:22:11.939956Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T11:22:11.939975Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940199Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940236Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:22:11.940245Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940281Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940296Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:22:11.940304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:22:11.943294Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-06T11:22:11.943373Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:22:11.943399Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T11:22:11.943411Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-362686","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142] <==
	{"level":"warn","ts":"2025-12-06T11:22:24.224177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.239332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.273364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.289392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.324773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.339659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.355985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.374578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.392737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.439739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.467640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.508101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.526654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.559442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.597778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.617069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.637593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.654391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.707616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.743479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.785918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.830720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.872611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.905950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:25.046314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:22:41 up  4:05,  0 user,  load average: 4.51, 2.81, 2.25
	Linux pause-362686 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda] <==
	I1206 11:22:20.020071       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:22:20.020323       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 11:22:20.020810       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:22:20.020826       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:22:20.020842       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:22:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:22:20.293346       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:22:20.293411       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:22:20.293421       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:22:20.294138       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 11:22:27.001362       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:22:27.001398       1 metrics.go:72] Registering metrics
	I1206 11:22:27.001477       1 controller.go:711] "Syncing nftables rules"
	I1206 11:22:30.293655       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 11:22:30.293826       1 main.go:301] handling current node
	I1206 11:22:40.295668       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 11:22:40.295729       1 main.go:301] handling current node
	
	
	==> kindnet [941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35] <==
	I1206 11:21:25.437908       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:21:25.444880       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 11:21:25.445054       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:21:25.445081       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:21:25.445094       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:21:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:21:25.667333       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:21:25.667429       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:21:25.667464       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:21:25.668237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 11:21:55.668419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 11:21:55.668418       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 11:21:55.668550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 11:21:55.668649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 11:21:57.367697       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:21:57.367731       1 metrics.go:72] Registering metrics
	I1206 11:21:57.367806       1 controller.go:711] "Syncing nftables rules"
	I1206 11:22:05.671225       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 11:22:05.671281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f] <==
	I1206 11:22:26.819237       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 11:22:26.819329       1 policy_source.go:240] refreshing policies
	I1206 11:22:26.827197       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 11:22:26.828088       1 aggregator.go:171] initial CRD sync complete...
	I1206 11:22:26.828147       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 11:22:26.828181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 11:22:26.834063       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 11:22:26.841822       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 11:22:26.849259       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 11:22:26.849419       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 11:22:26.860728       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 11:22:26.867214       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 11:22:26.867416       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 11:22:26.867506       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 11:22:26.861136       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 11:22:26.895810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 11:22:26.909952       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1206 11:22:26.924935       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 11:22:26.932396       1 cache.go:39] Caches are synced for autoregister controller
	I1206 11:22:27.304007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 11:22:28.819682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 11:22:30.318657       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 11:22:30.342110       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 11:22:30.381438       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 11:22:30.533491       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24] <==
	W1206 11:22:11.425067       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.425106       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.425190       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.425254       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426369       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426494       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426583       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426666       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429719       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429788       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429832       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429870       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429898       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429928       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429956       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429979       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.430006       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431235       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431265       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431290       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431313       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431337       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.433082       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.433206       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.433292       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815] <==
	I1206 11:21:23.267912       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 11:21:23.268011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 11:21:23.268382       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 11:21:23.268928       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 11:21:23.269000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 11:21:23.270063       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 11:21:23.270146       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 11:21:23.270298       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 11:21:23.271412       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 11:21:23.271493       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 11:21:23.278955       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 11:21:23.283586       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 11:21:23.287866       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:21:23.290324       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 11:21:23.290443       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:21:23.292305       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 11:21:23.292325       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 11:21:23.292183       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 11:21:23.292982       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:21:23.315747       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 11:21:23.315943       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 11:21:23.316062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-362686"
	I1206 11:21:23.316148       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1206 11:21:23.338974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:22:08.322006       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323] <==
	I1206 11:22:30.274165       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 11:22:30.275250       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 11:22:30.275326       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 11:22:30.278702       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 11:22:30.279476       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 11:22:30.279560       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 11:22:30.279601       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 11:22:30.282913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:22:30.285589       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 11:22:30.288008       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 11:22:30.288167       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 11:22:30.288271       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-362686"
	I1206 11:22:30.288365       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 11:22:30.288429       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 11:22:30.296545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 11:22:30.296684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 11:22:30.296813       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 11:22:30.297448       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 11:22:30.302800       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 11:22:30.305034       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 11:22:30.302814       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 11:22:30.307885       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 11:22:30.324011       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 11:22:30.327251       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 11:22:30.327685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135] <==
	I1206 11:22:23.953831       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:22:24.896353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 11:22:27.131338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:22:27.153434       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 11:22:27.173534       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:22:27.859252       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:22:27.859408       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:22:27.884355       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:22:27.884762       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:22:27.884968       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:22:27.886246       1 config.go:200] "Starting service config controller"
	I1206 11:22:27.886370       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:22:27.886417       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:22:27.886445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:22:27.886482       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:22:27.886509       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:22:27.887337       1 config.go:309] "Starting node config controller"
	I1206 11:22:27.887390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:22:27.887418       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:22:27.987425       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 11:22:27.987524       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:22:27.987538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0] <==
	I1206 11:21:25.509497       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:21:25.630085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 11:21:25.750045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:21:25.782763       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 11:21:25.795970       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:21:25.902462       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:21:25.902586       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:21:25.910046       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:21:25.910433       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:21:25.910671       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:21:25.912281       1 config.go:200] "Starting service config controller"
	I1206 11:21:25.912352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:21:25.912394       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:21:25.912435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:21:25.912495       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:21:25.912524       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:21:25.913284       1 config.go:309] "Starting node config controller"
	I1206 11:21:25.916065       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:21:25.916178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:21:26.012879       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 11:21:26.012930       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:21:26.013011       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356] <==
	I1206 11:22:24.142802       1 serving.go:386] Generated self-signed cert in-memory
	I1206 11:22:27.200010       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 11:22:27.211446       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:22:27.222858       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 11:22:27.222939       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 11:22:27.222959       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1206 11:22:27.222995       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 11:22:27.225406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:27.225421       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:27.225455       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 11:22:27.225461       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 11:22:27.324504       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1206 11:22:27.325683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 11:22:27.325783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948] <==
	E1206 11:21:16.924138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 11:21:16.924252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 11:21:16.924287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 11:21:16.924320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 11:21:16.924360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 11:21:16.924443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 11:21:16.924486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 11:21:16.924525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 11:21:16.941065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 11:21:16.942563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 11:21:16.942647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 11:21:16.942654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 11:21:16.942731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 11:21:16.942786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 11:21:16.942859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 11:21:16.942966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 11:21:16.943081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 11:21:16.943817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1206 11:21:18.023911       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:11.385200       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 11:22:11.385311       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 11:22:11.385323       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 11:22:11.385346       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:11.385592       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 11:22:11.385616       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.521693    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.522039    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: I1206 11:22:19.526144    1308 scope.go:117] "RemoveContainer" containerID="6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.526820    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e5f9392d751fb88f96ab0b53f361fb38" pod="kube-system/kube-controller-manager-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.527069    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjknk\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4a94694-360a-43df-8d83-3139b6279e4c" pod="kube-system/kube-proxy-gjknk"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.527714    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-2xclh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3cf5b95f-134e-4269-8c81-3a38b6f2a52d" pod="kube-system/kindnet-2xclh"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528007    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-fpnqh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cee22ce0-0d6a-4f3d-8f27-76f52d094dcb" pod="kube-system/coredns-66bc5c9577-fpnqh"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528330    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528634    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528933    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4d04ad68b81ee4f3b5ceee13f7759de" pod="kube-system/kube-apiserver-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.544275    1308 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-362686\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.544811    1308 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-362686\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.547389    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.548379    1308 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-362686\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.726102    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.741971    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="a4d04ad68b81ee4f3b5ceee13f7759de" pod="kube-system/kube-apiserver-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.756293    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="e5f9392d751fb88f96ab0b53f361fb38" pod="kube-system/kube-controller-manager-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.758147    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gjknk\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="a4a94694-360a-43df-8d83-3139b6279e4c" pod="kube-system/kube-proxy-gjknk"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.760716    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-2xclh\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="3cf5b95f-134e-4269-8c81-3a38b6f2a52d" pod="kube-system/kindnet-2xclh"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.770854    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-fpnqh\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="cee22ce0-0d6a-4f3d-8f27-76f52d094dcb" pod="kube-system/coredns-66bc5c9577-fpnqh"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.780099    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.827688    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:38 pause-362686 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 11:22:38 pause-362686 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 11:22:38 pause-362686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-362686 -n pause-362686
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-362686 -n pause-362686: exit status 2 (530.220021ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-362686 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-362686
helpers_test.go:243: (dbg) docker inspect pause-362686:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591",
	        "Created": "2025-12-06T11:20:47.934707258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 651285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:20:48.037042539Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/hostname",
	        "HostsPath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/hosts",
	        "LogPath": "/var/lib/docker/containers/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591/483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591-json.log",
	        "Name": "/pause-362686",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-362686:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-362686",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "483762e5302b9484ed625e935e766406424f4fdbc289b3cf0996bdcaa496d591",
	                "LowerDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568-init/diff:/var/lib/docker/overlay2/cc06c0f1f442a7275dc247974ca9074508813cfb842de89bc5bb1dae1e824222/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdc2df94df6791590749f9bba5012f4cfbddcdc0cfd44e4029a643cd93129568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-362686",
	                "Source": "/var/lib/docker/volumes/pause-362686/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-362686",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-362686",
	                "name.minikube.sigs.k8s.io": "pause-362686",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a3d6ab14467a7b64e43c83e1d7257b1e0a38d0b69abafec8e8823a9e24510a8",
	            "SandboxKey": "/var/run/docker/netns/8a3d6ab14467",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33378"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-362686": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:94:71:85:ff:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c3e6f047c35c81af103705fe8c66684615511c36ae2d343dff3df867f73b991",
	                    "EndpointID": "faf39f31a962d562ea795f3cfb0e101d297ce06c3976bd692d942db03118f821",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-362686",
	                        "483762e5302b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-362686 -n pause-362686
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-362686 -n pause-362686: exit status 2 (423.407944ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-362686 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-362686 logs -n 25: (1.864475684s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-334090 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status docker --all --full --no-pager                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat docker --no-pager                                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/docker/daemon.json                                                          │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo docker system info                                                                   │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cri-dockerd --version                                                                │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat containerd --no-pager                                                  │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo cat /etc/containerd/config.toml                                                      │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo containerd config dump                                                               │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl status crio --all --full --no-pager                                        │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo systemctl cat crio --no-pager                                                        │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ ssh     │ -p cilium-334090 sudo crio config                                                                          │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ delete  │ -p cilium-334090                                                                                           │ cilium-334090            │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:22 UTC │
	│ start   │ -p force-systemd-env-163342 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-163342 │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	│ start   │ -p pause-362686 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-362686             │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │ 06 Dec 25 11:22 UTC │
	│ pause   │ -p pause-362686 --alsologtostderr -v=5                                                                     │ pause-362686             │ jenkins │ v1.37.0 │ 06 Dec 25 11:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:22:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:22:09.713954  660500 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:22:09.714183  660500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:09.714205  660500 out.go:374] Setting ErrFile to fd 2...
	I1206 11:22:09.714225  660500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:09.714494  660500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:22:09.714890  660500 out.go:368] Setting JSON to false
	I1206 11:22:09.715838  660500 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14681,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 11:22:09.715934  660500 start.go:143] virtualization:  
	I1206 11:22:09.720606  660500 out.go:179] * [pause-362686] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:22:09.723627  660500 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 11:22:09.723720  660500 notify.go:221] Checking for updates...
	I1206 11:22:09.729271  660500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:22:09.611414  660445 start.go:309] selected driver: docker
	I1206 11:22:09.611442  660445 start.go:927] validating driver "docker" against <nil>
	I1206 11:22:09.611458  660445 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:22:09.612221  660445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:09.731876  660445 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:22:09.713518381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:09.732024  660445 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 11:22:09.732245  660445 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 11:22:09.735256  660445 out.go:179] * Using Docker driver with root privileges
	I1206 11:22:09.738113  660445 cni.go:84] Creating CNI manager for ""
	I1206 11:22:09.738183  660445 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:09.738197  660445 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:22:09.738289  660445 start.go:353] cluster config:
	{Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:09.738572  660500 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:22:09.741368  660445 out.go:179] * Starting "force-systemd-env-163342" primary control-plane node in "force-systemd-env-163342" cluster
	I1206 11:22:09.744198  660500 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 11:22:09.744200  660445 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:22:09.747118  660445 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:22:09.750629  660500 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:22:09.753867  660500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:22:09.750163  660445 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:09.750210  660445 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:22:09.750222  660445 cache.go:65] Caching tarball of preloaded images
	I1206 11:22:09.750244  660445 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:22:09.750305  660445 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:22:09.750316  660445 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:22:09.750428  660445 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json ...
	I1206 11:22:09.750450  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json: {Name:mk9877e03bc9487c7b21a100fcf71b755d22f891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:09.786759  660445 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:22:09.786779  660445 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:22:09.786798  660445 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:22:09.786831  660445 start.go:360] acquireMachinesLock for force-systemd-env-163342: {Name:mk95995c71845cbdbf4b572cde1795c098f3a698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:22:09.787050  660445 start.go:364] duration metric: took 202.516µs to acquireMachinesLock for "force-systemd-env-163342"
	I1206 11:22:09.787082  660445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
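The provisioning struct above is the same cluster config that was just saved as JSON (the profile.go line earlier in the log), so it can be re-read after the run; a minimal check, assuming the MINIKUBE_HOME used by this job:

	# dumps the persisted cluster config for the new profile
	cat /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json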
	I1206 11:22:09.787193  660445 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:22:09.757296  660500 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:09.758272  660500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:22:09.798892  660500 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:22:09.799020  660500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:09.913491  660500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:56 SystemTime:2025-12-06 11:22:09.90266796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:09.913597  660500 docker.go:319] overlay module found
	I1206 11:22:09.917074  660500 out.go:179] * Using the docker driver based on existing profile
	I1206 11:22:09.919986  660500 start.go:309] selected driver: docker
	I1206 11:22:09.920006  660500 start.go:927] validating driver "docker" against &{Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:09.920213  660500 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:22:09.920316  660500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:09.996254  660500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 11:22:09.986398254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:09.996641  660500 cni.go:84] Creating CNI manager for ""
	I1206 11:22:09.996686  660500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:09.996728  660500 start.go:353] cluster config:
	{Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:10.000714  660500 out.go:179] * Starting "pause-362686" primary control-plane node in "pause-362686" cluster
	I1206 11:22:10.004992  660500 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:22:10.011040  660500 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:22:10.013955  660500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:10.014013  660500 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:22:10.014026  660500 cache.go:65] Caching tarball of preloaded images
	I1206 11:22:10.014121  660500 preload.go:238] Found /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:22:10.014132  660500 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:22:10.014283  660500 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/config.json ...
	I1206 11:22:10.014550  660500 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:22:10.047598  660500 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:22:10.047624  660500 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:22:10.047873  660500 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:22:10.047922  660500 start.go:360] acquireMachinesLock for pause-362686: {Name:mkc3fbfa0390357cdd29a7741a7c1c2215c4924f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:22:10.047994  660500 start.go:364] duration metric: took 43.093µs to acquireMachinesLock for "pause-362686"
	I1206 11:22:10.048014  660500 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:22:10.048019  660500 fix.go:54] fixHost starting: 
	I1206 11:22:10.048299  660500 cli_runner.go:164] Run: docker container inspect pause-362686 --format={{.State.Status}}
	I1206 11:22:10.074133  660500 fix.go:112] recreateIfNeeded on pause-362686: state=Running err=<nil>
	W1206 11:22:10.074162  660500 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:22:09.794086  660445 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:22:09.794338  660445 start.go:159] libmachine.API.Create for "force-systemd-env-163342" (driver="docker")
	I1206 11:22:09.794370  660445 client.go:173] LocalClient.Create starting
	I1206 11:22:09.794424  660445 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem
	I1206 11:22:09.794455  660445 main.go:143] libmachine: Decoding PEM data...
	I1206 11:22:09.794469  660445 main.go:143] libmachine: Parsing certificate...
	I1206 11:22:09.794521  660445 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem
	I1206 11:22:09.794545  660445 main.go:143] libmachine: Decoding PEM data...
	I1206 11:22:09.794556  660445 main.go:143] libmachine: Parsing certificate...
	I1206 11:22:09.794917  660445 cli_runner.go:164] Run: docker network inspect force-systemd-env-163342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:22:09.819289  660445 cli_runner.go:211] docker network inspect force-systemd-env-163342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:22:09.819363  660445 network_create.go:284] running [docker network inspect force-systemd-env-163342] to gather additional debugging logs...
	I1206 11:22:09.819382  660445 cli_runner.go:164] Run: docker network inspect force-systemd-env-163342
	W1206 11:22:09.845165  660445 cli_runner.go:211] docker network inspect force-systemd-env-163342 returned with exit code 1
	I1206 11:22:09.845195  660445 network_create.go:287] error running [docker network inspect force-systemd-env-163342]: docker network inspect force-systemd-env-163342: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-163342 not found
	I1206 11:22:09.845210  660445 network_create.go:289] output of [docker network inspect force-systemd-env-163342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-163342 not found
	
	** /stderr **
	I1206 11:22:09.845385  660445 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:22:09.862928  660445 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-194638dca10b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:a6:03:7b:5f:e6} reservation:<nil>}
	I1206 11:22:09.863272  660445 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f3d8d6011d33 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:f2:c6:89:02:f2} reservation:<nil>}
	I1206 11:22:09.863551  660445 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b83707b00b77 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:03:cb:6f:a3:46} reservation:<nil>}
	I1206 11:22:09.863840  660445 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c3e6f047c35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:9b:68:4a:8c:06} reservation:<nil>}
	I1206 11:22:09.864258  660445 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e5080}
	I1206 11:22:09.864276  660445 network_create.go:124] attempt to create docker network force-systemd-env-163342 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:22:09.864329  660445 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-163342 force-systemd-env-163342
	I1206 11:22:09.941534  660445 network_create.go:108] docker network force-systemd-env-163342 192.168.85.0/24 created
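The network.go lines above probe minikube's candidate private /24 ranges in order (192.168.49, .58, .67, .76) and settle on the first free one, 192.168.85.0/24. A rough shell equivalent of that probe, using only the docker CLI (minikube implements this in Go, so this is illustrative, not its actual code):

	# subnets already claimed by existing docker networks
	taken=$(docker network ls -q | xargs docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	# walk the candidates in the same order as the log and stop at the first free one
	for third in 49 58 67 76 85; do
	  subnet="192.168.${third}.0/24"
	  echo "$taken" | grep -qx "$subnet" || { echo "free: $subnet"; break; }
	done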
	I1206 11:22:09.941568  660445 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-163342" container
	I1206 11:22:09.941653  660445 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:22:09.978817  660445 cli_runner.go:164] Run: docker volume create force-systemd-env-163342 --label name.minikube.sigs.k8s.io=force-systemd-env-163342 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:22:10.012601  660445 oci.go:103] Successfully created a docker volume force-systemd-env-163342
	I1206 11:22:10.012812  660445 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-163342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-163342 --entrypoint /usr/bin/test -v force-systemd-env-163342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:22:10.670441  660445 oci.go:107] Successfully prepared a docker volume force-systemd-env-163342
	I1206 11:22:10.670515  660445 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:10.670525  660445 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:22:10.670589  660445 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-163342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:22:14.193209  660445 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-163342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.522572863s)
	I1206 11:22:14.193262  660445 kic.go:203] duration metric: took 3.522733303s to extract preloaded images to volume ...
	W1206 11:22:14.193418  660445 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:22:14.193540  660445 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:22:14.254490  660445 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-163342 --name force-systemd-env-163342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-163342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-163342 --network force-systemd-env-163342 --ip 192.168.85.2 --volume force-systemd-env-163342:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
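The docker run above publishes SSH, the API server, and a few service ports on ephemeral loopback ports (the --publish=127.0.0.1::22 style flags). To recover the host port docker picked, e.g. the SSH port 33398 that the provisioning steps below dial:

	# prints the 127.0.0.1:<port> mapping docker chose for the container's SSH port
	docker port force-systemd-env-163342 22/tcp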
	I1206 11:22:10.079450  660500 out.go:252] * Updating the running docker "pause-362686" container ...
	I1206 11:22:10.079493  660500 machine.go:94] provisionDockerMachine start ...
	I1206 11:22:10.079578  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.100860  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:10.107554  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:10.107589  660500 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:22:10.299880  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-362686
	
	I1206 11:22:10.299959  660500 ubuntu.go:182] provisioning hostname "pause-362686"
	I1206 11:22:10.300051  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.324940  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:10.325364  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:10.325377  660500 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-362686 && echo "pause-362686" | sudo tee /etc/hostname
	I1206 11:22:10.565160  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-362686
	
	I1206 11:22:10.565286  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.587773  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:10.588085  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:10.588109  660500 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-362686' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-362686/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-362686' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:22:10.776538  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:22:10.776567  660500 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 11:22:10.776589  660500 ubuntu.go:190] setting up certificates
	I1206 11:22:10.776597  660500 provision.go:84] configureAuth start
	I1206 11:22:10.776657  660500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-362686
	I1206 11:22:10.807285  660500 provision.go:143] copyHostCerts
	I1206 11:22:10.807354  660500 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 11:22:10.807374  660500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:22:10.807452  660500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 11:22:10.807561  660500 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 11:22:10.807571  660500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:22:10.807599  660500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 11:22:10.807655  660500 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 11:22:10.807670  660500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:22:10.807696  660500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 11:22:10.807754  660500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.pause-362686 san=[127.0.0.1 192.168.76.2 localhost minikube pause-362686]
	I1206 11:22:10.966620  660500 provision.go:177] copyRemoteCerts
	I1206 11:22:10.966740  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:22:10.966815  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:10.986201  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:11.093147  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:22:11.117156  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1206 11:22:11.142785  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:22:11.166858  660500 provision.go:87] duration metric: took 390.23099ms to configureAuth
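configureAuth above regenerates the machine's server certificate with the SANs listed in the provision line (127.0.0.1, 192.168.76.2, localhost, minikube, pause-362686). A quick way to confirm what was baked in, assuming the host paths from this run:

	# prints the Subject Alternative Names of the freshly generated server cert
	openssl x509 -in /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'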
	I1206 11:22:11.166961  660500 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:22:11.167370  660500 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:11.167637  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:11.196859  660500 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:11.197206  660500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1206 11:22:11.197236  660500 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:22:16.573645  660500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:22:16.573669  660500 machine.go:97] duration metric: took 6.494167045s to provisionDockerMachine
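The printf above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube before restarting crio; presumably the crio unit in the kicbase image sources that file, so the --insecure-registry flag can be spot-checked from the host:

	# the sysconfig contents and the flag on the running crio process
	docker exec pause-362686 sh -c 'cat /etc/sysconfig/crio.minikube; pgrep -a crio'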
	I1206 11:22:16.573702  660500 start.go:293] postStartSetup for "pause-362686" (driver="docker")
	I1206 11:22:16.573716  660500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:22:16.573788  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:22:16.573847  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.592871  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:16.698887  660500 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:22:16.702470  660500 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:22:16.702498  660500 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:22:16.702510  660500 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 11:22:16.702562  660500 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 11:22:16.702648  660500 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 11:22:16.702760  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:22:16.710147  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:16.727241  660500 start.go:296] duration metric: took 153.51936ms for postStartSetup
	I1206 11:22:16.727322  660500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:22:16.727360  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.745204  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:16.848945  660500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:22:16.856109  660500 fix.go:56] duration metric: took 6.808080579s for fixHost
	I1206 11:22:16.856136  660500 start.go:83] releasing machines lock for "pause-362686", held for 6.808133289s
	I1206 11:22:16.856215  660500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-362686
	I1206 11:22:16.878911  660500 ssh_runner.go:195] Run: cat /version.json
	I1206 11:22:16.878968  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.878914  660500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:22:16.879282  660500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-362686
	I1206 11:22:16.904612  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:16.909037  660500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/pause-362686/id_rsa Username:docker}
	I1206 11:22:17.097488  660500 ssh_runner.go:195] Run: systemctl --version
	I1206 11:22:17.104036  660500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:22:17.144180  660500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:22:17.148801  660500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:22:17.148894  660500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:22:17.156807  660500 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
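The find above would rename any bridge or podman CNI configs to *.mk_disabled so they cannot shadow the kindnet CNI chosen earlier; in this run nothing matched. Listing the directory shows what cri-o will actually load:

	# configs ending in .mk_disabled are the ones minikube has parked
	docker exec pause-362686 ls -la /etc/cni/net.d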
	I1206 11:22:17.156881  660500 start.go:496] detecting cgroup driver to use...
	I1206 11:22:17.156940  660500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:22:17.157000  660500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:22:17.172168  660500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:22:17.185619  660500 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:22:17.185752  660500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:22:17.201826  660500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:22:17.215702  660500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:22:17.357942  660500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:22:17.507061  660500 docker.go:234] disabling docker service ...
	I1206 11:22:17.507215  660500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:22:17.523514  660500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:22:17.537233  660500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:22:17.669832  660500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:22:17.808704  660500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:22:17.822618  660500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:22:17.837042  660500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:22:17.837124  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.846457  660500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:22:17.846527  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.855651  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.865265  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.874983  660500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:22:17.883356  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.893206  660500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.902898  660500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:17.912908  660500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:22:17.920731  660500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:22:17.928225  660500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:18.077961  660500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 11:22:18.333431  660500 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:22:18.333497  660500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:22:18.337479  660500 start.go:564] Will wait 60s for crictl version
	I1206 11:22:18.337539  660500 ssh_runner.go:195] Run: which crictl
	I1206 11:22:18.342311  660500 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:22:18.376401  660500 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:22:18.376492  660500 ssh_runner.go:195] Run: crio --version
	I1206 11:22:18.411871  660500 ssh_runner.go:195] Run: crio --version
	I1206 11:22:18.468068  660500 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
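The sed sequence a few lines up rewires /etc/crio/crio.conf.d/02-crio.conf: pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls. A one-liner to confirm the resulting values, assuming the container is still up:

	# greps the four settings the sed edits above are expected to have set
	docker exec pause-362686 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf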
	I1206 11:22:14.546711  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Running}}
	I1206 11:22:14.569427  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Status}}
	I1206 11:22:14.594607  660445 cli_runner.go:164] Run: docker exec force-systemd-env-163342 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:22:14.646152  660445 oci.go:144] the created container "force-systemd-env-163342" has a running status.
	I1206 11:22:14.646180  660445 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa...
	I1206 11:22:14.842448  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 11:22:14.842507  660445 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:22:14.872736  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Status}}
	I1206 11:22:14.905004  660445 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:22:14.905029  660445 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-163342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:22:14.962727  660445 cli_runner.go:164] Run: docker container inspect force-systemd-env-163342 --format={{.State.Status}}
	I1206 11:22:14.987384  660445 machine.go:94] provisionDockerMachine start ...
	I1206 11:22:14.987489  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:15.014944  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:15.015426  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:15.015440  660445 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:22:15.016294  660445 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:22:18.174973  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-163342
	
	I1206 11:22:18.175041  660445 ubuntu.go:182] provisioning hostname "force-systemd-env-163342"
	I1206 11:22:18.175154  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:18.199344  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:18.199742  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:18.199761  660445 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-163342 && echo "force-systemd-env-163342" | sudo tee /etc/hostname
	I1206 11:22:18.365153  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-163342
	
	I1206 11:22:18.365279  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:18.384736  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:18.385049  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:18.385071  660445 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-163342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-163342/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-163342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:22:18.547852  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:22:18.547875  660445 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-484819/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-484819/.minikube}
	I1206 11:22:18.547896  660445 ubuntu.go:190] setting up certificates
	I1206 11:22:18.547905  660445 provision.go:84] configureAuth start
	I1206 11:22:18.547971  660445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-163342
	I1206 11:22:18.577171  660445 provision.go:143] copyHostCerts
	I1206 11:22:18.577225  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:22:18.577259  660445 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem, removing ...
	I1206 11:22:18.577266  660445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem
	I1206 11:22:18.577348  660445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/ca.pem (1082 bytes)
	I1206 11:22:18.577435  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:22:18.577453  660445 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem, removing ...
	I1206 11:22:18.577458  660445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem
	I1206 11:22:18.577483  660445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/cert.pem (1123 bytes)
	I1206 11:22:18.577530  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:22:18.577551  660445 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem, removing ...
	I1206 11:22:18.577555  660445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem
	I1206 11:22:18.577578  660445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-484819/.minikube/key.pem (1675 bytes)
	I1206 11:22:18.577632  660445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-163342 san=[127.0.0.1 192.168.85.2 force-systemd-env-163342 localhost minikube]
	I1206 11:22:18.911768  660445 provision.go:177] copyRemoteCerts
	I1206 11:22:18.911876  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:22:18.911934  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:18.938444  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.048874  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 11:22:19.048932  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:22:19.071115  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 11:22:19.071242  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1206 11:22:19.103494  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 11:22:19.103550  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:22:19.125523  660445 provision.go:87] duration metric: took 577.596471ms to configureAuth
	I1206 11:22:19.125567  660445 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:22:19.125748  660445 config.go:182] Loaded profile config "force-systemd-env-163342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:19.125861  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.145717  660445 main.go:143] libmachine: Using SSH client type: native
	I1206 11:22:19.146035  660445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1206 11:22:19.146053  660445 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:22:18.471186  660500 cli_runner.go:164] Run: docker network inspect pause-362686 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:22:18.488382  660500 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:22:18.492676  660500 kubeadm.go:884] updating cluster {Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:22:18.492815  660500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:18.492865  660500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:18.531606  660500 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:18.531630  660500 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:22:18.531695  660500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:18.570333  660500 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:18.570362  660500 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:22:18.570371  660500 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1206 11:22:18.570470  660500 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-362686 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
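The kubelet flags above end up in a systemd drop-in (the 362-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); with systemd running inside the kicbase container, the effective unit can be inspected with:

	# prints kubelet.service together with the 10-kubeadm.conf drop-in
	docker exec pause-362686 systemctl cat kubelet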
	I1206 11:22:18.570563  660500 ssh_runner.go:195] Run: crio config
	I1206 11:22:18.659080  660500 cni.go:84] Creating CNI manager for ""
	I1206 11:22:18.659178  660500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:18.659243  660500 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:22:18.659290  660500 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-362686 NodeName:pause-362686 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:22:18.659439  660500 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-362686"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:22:18.659557  660500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:22:18.667898  660500 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:22:18.668021  660500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:22:18.675827  660500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1206 11:22:18.693615  660500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:22:18.707963  660500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
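The 2209-byte file just written is the four-document kubeadm manifest dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal spot-check, assuming shell access to the node, is to count the document kinds:

    # Hypothetical sanity check, not part of the test run: the generated
    # manifest should contain exactly four "kind:" documents.
    grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new   # expect: 4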
	I1206 11:22:18.724726  660500 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:22:18.728923  660500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:18.892175  660500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:22:18.907441  660500 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686 for IP: 192.168.76.2
	I1206 11:22:18.907459  660500 certs.go:195] generating shared ca certs ...
	I1206 11:22:18.907479  660500 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:18.907605  660500 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 11:22:18.907647  660500 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 11:22:18.907655  660500 certs.go:257] generating profile certs ...
	I1206 11:22:18.907737  660500 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key
	I1206 11:22:18.907802  660500 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/apiserver.key.d90920fb
	I1206 11:22:18.907847  660500 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/proxy-client.key
	I1206 11:22:18.907954  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 11:22:18.907983  660500 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 11:22:18.907991  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 11:22:18.908021  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:22:18.908050  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:22:18.908077  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 11:22:18.908124  660500 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:18.908701  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:22:18.928076  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:22:18.958817  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:22:18.981756  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 11:22:19.004320  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 11:22:19.025824  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:22:19.048382  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:22:19.070810  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:22:19.095908  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:22:19.119510  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 11:22:19.146661  660500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 11:22:19.170674  660500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:22:19.187209  660500 ssh_runner.go:195] Run: openssl version
	I1206 11:22:19.193663  660500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.205616  660500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:22:19.224961  660500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.233376  660500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.233457  660500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:19.278227  660500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:22:19.286356  660500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.294282  660500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 11:22:19.302083  660500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.305945  660500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.306018  660500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 11:22:19.361843  660500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:22:19.369362  660500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.380225  660500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 11:22:19.388977  660500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.394105  660500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.394180  660500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 11:22:19.457994  660500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:22:19.480476  660500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:22:19.489683  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:22:19.586688  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:22:19.739401  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:22:19.863483  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:22:19.984657  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:22:20.069889  660500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
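Two idioms recur in the certificate checks above: the /etc/ssl/certs/<hash>.0 names are OpenSSL subject hashes of the CA files (b5213941 is the hash of minikubeCA.pem), and -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 h). A sketch of both, with paths taken from the log:

    # Recompute the symlink name for the minikube CA (prints b5213941 here):
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # Exit status 0 only if the cert is still valid 24 hours from now:
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400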
	I1206 11:22:20.139959  660500 kubeadm.go:401] StartCluster: {Name:pause-362686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-362686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:20.140094  660500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:22:20.140155  660500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:22:20.186882  660500 cri.go:89] found id: "86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda"
	I1206 11:22:20.186910  660500 cri.go:89] found id: "ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63"
	I1206 11:22:20.186915  660500 cri.go:89] found id: "6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f"
	I1206 11:22:20.186919  660500 cri.go:89] found id: "a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135"
	I1206 11:22:20.186922  660500 cri.go:89] found id: "ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323"
	I1206 11:22:20.186926  660500 cri.go:89] found id: "c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142"
	I1206 11:22:20.186929  660500 cri.go:89] found id: "0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356"
	I1206 11:22:20.186932  660500 cri.go:89] found id: "6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	I1206 11:22:20.186936  660500 cri.go:89] found id: "941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35"
	I1206 11:22:20.186943  660500 cri.go:89] found id: "a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0"
	I1206 11:22:20.186946  660500 cri.go:89] found id: "edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815"
	I1206 11:22:20.186949  660500 cri.go:89] found id: "a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4"
	I1206 11:22:20.186953  660500 cri.go:89] found id: "e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948"
	I1206 11:22:20.186956  660500 cri.go:89] found id: "a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24"
	I1206 11:22:20.186959  660500 cri.go:89] found id: ""
	I1206 11:22:20.187006  660500 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 11:22:20.207676  660500 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:22:20Z" level=error msg="open /run/runc: no such file or directory"
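The runc failure above is benign: /run/runc is runc's default state directory, and with no directory there are no paused containers to list, so minikube logs the warning and falls through to the restart path. A tolerant form of the same probe (the fallback message is ours, not minikube's):

    # Same command as in the log, but degrade gracefully when runc has no state:
    sudo runc list -f json 2>/dev/null || echo 'no runc state; nothing is paused'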
	I1206 11:22:20.207761  660500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:22:20.240156  660500 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:22:20.240180  660500 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:22:20.240245  660500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:22:20.263634  660500 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:22:20.264275  660500 kubeconfig.go:125] found "pause-362686" server: "https://192.168.76.2:8443"
	I1206 11:22:20.264833  660500 kapi.go:59] client config for pause-362686: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:22:20.265344  660500 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:22:20.265365  660500 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:22:20.265371  660500 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:22:20.265381  660500 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:22:20.265392  660500 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:22:20.265637  660500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:22:20.289509  660500 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 11:22:20.289546  660500 kubeadm.go:602] duration metric: took 49.358622ms to restartPrimaryControlPlane
	I1206 11:22:20.289556  660500 kubeadm.go:403] duration metric: took 149.607989ms to StartCluster
	I1206 11:22:20.289580  660500 settings.go:142] acquiring lock: {Name:mk7eec112652eae38dac4afce804445d9092bd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:20.289640  660500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:22:20.290277  660500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/kubeconfig: {Name:mk884a72161ed5cd0cfdbffc4a21f277282d705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:20.290497  660500 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 11:22:20.290831  660500 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:20.290878  660500 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:22:20.295656  660500 out.go:179] * Verifying Kubernetes components...
	I1206 11:22:20.295758  660500 out.go:179] * Enabled addons: 
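From here on the transcript interleaves two concurrent runs: PID 660500 (profile pause-362686, restarting an existing cluster) and PID 660445 (profile force-systemd-env-163342, a first start). To follow one flow at a time, split on the PID column; the input file name below is hypothetical:

    # Separate the interleaved runs by PID (log file name assumed):
    grep ' 660500 ' minikube.log > pause-362686.log
    grep ' 660445 ' minikube.log > force-systemd-env-163342.log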
	I1206 11:22:19.499535  660445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:22:19.499611  660445 machine.go:97] duration metric: took 4.512202287s to provisionDockerMachine
	I1206 11:22:19.499637  660445 client.go:176] duration metric: took 9.70526029s to LocalClient.Create
	I1206 11:22:19.499685  660445 start.go:167] duration metric: took 9.705347205s to libmachine.API.Create "force-systemd-env-163342"
	I1206 11:22:19.499716  660445 start.go:293] postStartSetup for "force-systemd-env-163342" (driver="docker")
	I1206 11:22:19.499742  660445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:22:19.499841  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:22:19.499904  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.532950  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.652591  660445 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:22:19.659573  660445 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:22:19.659603  660445 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:22:19.659614  660445 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/addons for local assets ...
	I1206 11:22:19.659665  660445 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-484819/.minikube/files for local assets ...
	I1206 11:22:19.659747  660445 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> 4880682.pem in /etc/ssl/certs
	I1206 11:22:19.659754  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /etc/ssl/certs/4880682.pem
	I1206 11:22:19.659849  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:22:19.673751  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:19.706639  660445 start.go:296] duration metric: took 206.894773ms for postStartSetup
	I1206 11:22:19.707053  660445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-163342
	I1206 11:22:19.726131  660445 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/config.json ...
	I1206 11:22:19.726399  660445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:22:19.726446  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.760055  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.882898  660445 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:22:19.890659  660445 start.go:128] duration metric: took 10.103451479s to createHost
	I1206 11:22:19.890682  660445 start.go:83] releasing machines lock for "force-systemd-env-163342", held for 10.103619722s
	I1206 11:22:19.890752  660445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-163342
	I1206 11:22:19.914827  660445 ssh_runner.go:195] Run: cat /version.json
	I1206 11:22:19.914878  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.915105  660445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:22:19.915198  660445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-163342
	I1206 11:22:19.941652  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:19.955816  660445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/force-systemd-env-163342/id_rsa Username:docker}
	I1206 11:22:20.070453  660445 ssh_runner.go:195] Run: systemctl --version
	I1206 11:22:20.193429  660445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:22:20.272374  660445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:22:20.281474  660445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:22:20.281558  660445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:22:20.324647  660445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
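minikube side-lines competing CNI configs by renaming them with a .mk_disabled suffix, leaving room for the kindnet config it recommends for the docker driver + crio runtime. Should the stock bridge config ever be needed again, the rollback is the inverse rename (path from the log):

    # Manual rollback of the rename performed above:
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist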
	I1206 11:22:20.324671  660445 start.go:496] detecting cgroup driver to use...
	I1206 11:22:20.324688  660445 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1206 11:22:20.324750  660445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:22:20.349754  660445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:22:20.373399  660445 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:22:20.373487  660445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:22:20.402915  660445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:22:20.431081  660445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:22:20.635542  660445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:22:20.837100  660445 docker.go:234] disabling docker service ...
	I1206 11:22:20.837191  660445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:22:20.879171  660445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:22:20.895730  660445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:22:21.108272  660445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:22:21.303698  660445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:22:21.318362  660445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:22:21.337490  660445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:22:21.337599  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.348622  660445 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 11:22:21.348743  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.357721  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.366308  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.375624  660445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:22:21.383838  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.400537  660445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:22:21.421498  660445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
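The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to systemd, re-adds conmon_cgroup = "pod", and seeds default_sysctls with the unprivileged-port sysctl. A quick way to verify the resulting keys (expected values reconstructed from the sed expressions, not read from the file):

    # Expect: pause_image = "registry.k8s.io/pause:3.10.1",
    #         cgroup_manager = "systemd", conmon_cgroup = "pod", and
    #         "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
        /etc/crio/crio.conf.d/02-crio.conf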
	I1206 11:22:21.436294  660445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:22:21.444817  660445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:22:21.455292  660445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:21.627946  660445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 11:22:21.875422  660445 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:22:21.875538  660445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:22:21.886447  660445 start.go:564] Will wait 60s for crictl version
	I1206 11:22:21.886555  660445 ssh_runner.go:195] Run: which crictl
	I1206 11:22:21.890573  660445 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:22:21.937391  660445 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:22:21.937514  660445 ssh_runner.go:195] Run: crio --version
	I1206 11:22:21.997843  660445 ssh_runner.go:195] Run: crio --version
	I1206 11:22:22.061371  660445 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 11:22:22.064277  660445 cli_runner.go:164] Run: docker network inspect force-systemd-env-163342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:22:22.095214  660445 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:22:22.099450  660445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:22:22.112933  660445 kubeadm.go:884] updating cluster {Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:22:22.113051  660445 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:22:22.113104  660445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:22.167917  660445 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:22.167938  660445 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:22:22.167990  660445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:22:22.232097  660445 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:22:22.232117  660445 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:22:22.232125  660445 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1206 11:22:22.232211  660445 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-163342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:22:22.232295  660445 ssh_runner.go:195] Run: crio config
	I1206 11:22:22.383598  660445 cni.go:84] Creating CNI manager for ""
	I1206 11:22:22.383630  660445 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:22:22.383649  660445 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:22:22.383672  660445 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-163342 NodeName:force-systemd-env-163342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:22:22.383811  660445 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-163342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:22:22.383902  660445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:22:22.395708  660445 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:22:22.395797  660445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:22:22.411290  660445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 11:22:22.425508  660445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:22:22.449013  660445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
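Aside from names and IPs, this 2220-byte manifest differs from the pause-362686 one above in the cgroup driver: systemd here (enforced via flags, per the "detecting cgroup driver" lines), cgroupfs there. One grep shows which driver a node's manifest carries:

    # Prints "cgroupDriver: systemd" for this profile,
    # "cgroupDriver: cgroupfs" for pause-362686:
    grep 'cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new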
	I1206 11:22:22.465926  660445 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:22:22.471692  660445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:22:22.485406  660445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:22.673170  660445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:22:22.712553  660445 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342 for IP: 192.168.85.2
	I1206 11:22:22.712578  660445 certs.go:195] generating shared ca certs ...
	I1206 11:22:22.712595  660445 certs.go:227] acquiring lock for ca certs: {Name:mk654f77abd8383620ce6ddae56f2a6a8c1d96d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:22.712731  660445 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key
	I1206 11:22:22.712783  660445 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key
	I1206 11:22:22.712795  660445 certs.go:257] generating profile certs ...
	I1206 11:22:22.712852  660445 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.key
	I1206 11:22:22.712867  660445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.crt with IP's: []
	I1206 11:22:23.093894  660445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.crt ...
	I1206 11:22:23.093927  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.crt: {Name:mk29a360def36d00768aec66005155444e965c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.094154  660445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.key ...
	I1206 11:22:23.094171  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/client.key: {Name:mk0b30dea4883e3fbb6cbaafc38f130913ccb3e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.094283  660445 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a
	I1206 11:22:23.094303  660445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:22:23.325649  660445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a ...
	I1206 11:22:23.325682  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a: {Name:mk1d8bbd0aae516f33c82bce7ee6acd6c7994a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.325860  660445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a ...
	I1206 11:22:23.325877  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a: {Name:mk05b66e8e7ce836728e0b073b00b3d6952c0eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.325950  660445 certs.go:382] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt.626aa41a -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt
	I1206 11:22:23.326039  660445 certs.go:386] copying /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key.626aa41a -> /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key
	I1206 11:22:23.326102  660445 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key
	I1206 11:22:23.326121  660445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt with IP's: []
	I1206 11:22:23.856458  660445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt ...
	I1206 11:22:23.856491  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt: {Name:mkd1b93de2bb0bb06daa15078839fabd204ce7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:22:23.856671  660445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key ...
	I1206 11:22:23.856689  660445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key: {Name:mkc73b28d41c5777b52eeb107c26e8756349b4f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
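The apiserver serving certificate generated above embeds the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], i.e. the in-cluster service IPs, loopback, and the node IP. To confirm what actually landed in the cert (path taken from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt \
      | grep -A1 'Subject Alternative Name'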
	I1206 11:22:23.856763  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 11:22:23.856791  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 11:22:23.856804  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 11:22:23.856822  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 11:22:23.856834  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 11:22:23.856867  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 11:22:23.856884  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 11:22:23.856900  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 11:22:23.856953  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem (1338 bytes)
	W1206 11:22:23.857003  660445 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068_empty.pem, impossibly tiny 0 bytes
	I1206 11:22:23.857016  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 11:22:23.857044  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:22:23.857073  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:22:23.857100  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/key.pem (1675 bytes)
	I1206 11:22:23.857159  660445 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem (1708 bytes)
	I1206 11:22:23.857196  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem -> /usr/share/ca-certificates/488068.pem
	I1206 11:22:23.857216  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem -> /usr/share/ca-certificates/4880682.pem
	I1206 11:22:23.857239  660445 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:23.857825  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:22:23.893745  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:22:23.922834  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:22:23.959461  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 11:22:23.988664  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 11:22:24.020397  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:22:24.039805  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:22:24.059293  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/force-systemd-env-163342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:22:24.078830  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/certs/488068.pem --> /usr/share/ca-certificates/488068.pem (1338 bytes)
	I1206 11:22:24.110992  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/ssl/certs/4880682.pem --> /usr/share/ca-certificates/4880682.pem (1708 bytes)
	I1206 11:22:24.144308  660445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:22:24.170440  660445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:22:24.188494  660445 ssh_runner.go:195] Run: openssl version
	I1206 11:22:24.197872  660445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.219678  660445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:22:24.236399  660445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.248503  660445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.248598  660445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:22:24.295267  660445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:22:24.308291  660445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:22:24.321215  660445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.333009  660445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488068.pem /etc/ssl/certs/488068.pem
	I1206 11:22:24.348461  660445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.356627  660445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:21 /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.356745  660445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488068.pem
	I1206 11:22:24.415167  660445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:22:24.423281  660445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/488068.pem /etc/ssl/certs/51391683.0
	I1206 11:22:20.299918  660500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:22:20.300045  660500 addons.go:530] duration metric: took 9.164657ms for enable addons: enabled=[]
	I1206 11:22:20.682860  660500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:22:20.711302  660500 node_ready.go:35] waiting up to 6m0s for node "pause-362686" to be "Ready" ...
	I1206 11:22:24.454078  660445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.469316  660445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4880682.pem /etc/ssl/certs/4880682.pem
	I1206 11:22:24.487330  660445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.497828  660445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:21 /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.497974  660445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4880682.pem
	I1206 11:22:24.565210  660445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:22:24.573291  660445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4880682.pem /etc/ssl/certs/3ec20f2e.0
	I1206 11:22:24.581304  660445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:22:24.587568  660445 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
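This stat probe is how minikube tells a first start from a restart: pause-362686 had /var/lib/minikube/certs/apiserver-kubelet-client.crt and took the "cluster restart" path above, while this profile lacks it and proceeds to a fresh kubeadm init. The decision, condensed to shell:

    # Same path as in the log; the echo messages are ours, not minikube's.
    if stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo 'existing control plane: attempt restart'
    else
      echo 'first start: run kubeadm init'
    fi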
	I1206 11:22:24.587668  660445 kubeadm.go:401] StartCluster: {Name:force-systemd-env-163342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-163342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:22:24.587774  660445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:22:24.587877  660445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:22:24.656951  660445 cri.go:89] found id: ""
	I1206 11:22:24.657059  660445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:22:24.666861  660445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:22:24.676371  660445 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:22:24.676483  660445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:22:24.690887  660445 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:22:24.690952  660445 kubeadm.go:158] found existing configuration files:
	
	I1206 11:22:24.691036  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:22:24.699431  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:22:24.699542  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:22:24.709521  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:22:24.718323  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:22:24.718460  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:22:24.728401  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:22:24.737046  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:22:24.737186  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:22:24.753393  660445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:22:24.763426  660445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:22:24.763541  660445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
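Note: the four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm regenerates it (here every grep exits with status 2 because the files do not exist yet, so each rm -f is a no-op). A hedged Go sketch of that loop (function name and inline exec are illustrative, not minikube's code):

package main

import (
	"os/exec"
	"path/filepath"
)

// sweepStaleConfigs removes kubeconfigs that do not mention the expected
// control-plane URL; grep returns non-zero both when the string is absent
// and when the file is missing, and rm -f tolerates either case.
func sweepStaleConfigs(endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:8443")
}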
	I1206 11:22:24.770905  660445 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:22:24.852559  660445 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 11:22:24.852779  660445 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:22:24.902436  660445 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:22:24.902612  660445 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:22:24.902675  660445 kubeadm.go:319] OS: Linux
	I1206 11:22:24.902757  660445 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:22:24.902866  660445 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:22:24.902934  660445 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:22:24.903000  660445 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:22:24.903071  660445 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:22:24.903167  660445 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:22:24.903247  660445 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:22:24.903328  660445 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:22:24.903404  660445 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:22:25.028568  660445 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:22:25.028742  660445 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:22:25.028868  660445 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:22:25.043505  660445 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:22:25.048939  660445 out.go:252]   - Generating certificates and keys ...
	I1206 11:22:25.049116  660445 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:22:25.049210  660445 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:22:25.110873  660445 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:22:25.796129  660445 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:22:26.166878  660445 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:22:26.909344  660445 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:22:27.167453  660445 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:22:27.167600  660445 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-163342 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:22:27.366128  660445 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:22:27.366497  660445 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-163342 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:22:27.821243  660445 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:22:28.378804  660445 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:22:28.570920  660445 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:22:28.571187  660445 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:22:29.251624  660445 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:22:26.732511  660500 node_ready.go:49] node "pause-362686" is "Ready"
	I1206 11:22:26.732536  660500 node_ready.go:38] duration metric: took 6.021205356s for node "pause-362686" to be "Ready" ...
	I1206 11:22:26.732548  660500 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:22:26.732608  660500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:26.781476  660500 api_server.go:72] duration metric: took 6.490942169s to wait for apiserver process to appear ...
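Note: the pgrep above waits for the kube-apiserver process itself to exist before any HTTP health checking starts; -f matches the pattern against the full command line, -x requires a whole-line match, and -n picks the newest matching process. A small Go wrapper showing the same check (illustrative, not minikube's helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID returns the PID of the newest kube-apiserver process whose
// full command line matches the pattern; the error is non-nil while no such
// process exists, which is what the wait loop keys on.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pid, err := apiserverPID()
	fmt.Println(pid, err)
}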
	I1206 11:22:26.781498  660500 api_server.go:88] waiting for apiserver healthz status ...
	I1206 11:22:26.781518  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:26.803705  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 11:22:26.803787  660500 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 11:22:27.282430  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:27.305615  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:22:27.305703  660500 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:22:27.782284  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:27.799006  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:22:27.799104  660500 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:22:28.281599  660500 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 11:22:28.291627  660500 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 11:22:28.292958  660500 api_server.go:141] control plane version: v1.34.2
	I1206 11:22:28.292980  660500 api_server.go:131] duration metric: took 1.511474397s to wait for apiserver health ...
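Note: the healthz sequence above is a plain poll-until-200 loop: the initial 403 comes from the unauthenticated probe hitting the apiserver before RBAC bootstrap finishes, the 500s from the [-]poststarthook/rbac/bootstrap-roles check, and both are treated as "not ready yet" with a retry roughly every 500ms. A minimal sketch of such a loop in Go (InsecureSkipVerify stands in for the cluster-CA configuration minikube actually uses):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200 "ok"
// or the deadline passes; non-200 bodies are printed much like the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // apiserver not listening yet
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	}
	return fmt.Errorf("healthz not ready within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", time.Minute))
}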
	I1206 11:22:28.292989  660500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 11:22:28.297925  660500 system_pods.go:59] 7 kube-system pods found
	I1206 11:22:28.297971  660500 system_pods.go:61] "coredns-66bc5c9577-fpnqh" [cee22ce0-0d6a-4f3d-8f27-76f52d094dcb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:22:28.297978  660500 system_pods.go:61] "etcd-pause-362686" [8995ba07-f227-41f3-bb7b-dd67ce35d0ee] Running
	I1206 11:22:28.297984  660500 system_pods.go:61] "kindnet-2xclh" [3cf5b95f-134e-4269-8c81-3a38b6f2a52d] Running
	I1206 11:22:28.297989  660500 system_pods.go:61] "kube-apiserver-pause-362686" [b2ca5685-08cf-4711-bd88-1184aa55260c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:22:28.297995  660500 system_pods.go:61] "kube-controller-manager-pause-362686" [ddcd8a5e-c280-40ad-9e1b-875496725a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:22:28.298000  660500 system_pods.go:61] "kube-proxy-gjknk" [a4a94694-360a-43df-8d83-3139b6279e4c] Running
	I1206 11:22:28.298004  660500 system_pods.go:61] "kube-scheduler-pause-362686" [6fa2f3a6-e9b4-4716-b97f-2b4583a0e219] Running
	I1206 11:22:28.298009  660500 system_pods.go:74] duration metric: took 5.01501ms to wait for pod list to return data ...
	I1206 11:22:28.298021  660500 default_sa.go:34] waiting for default service account to be created ...
	I1206 11:22:28.300983  660500 default_sa.go:45] found service account: "default"
	I1206 11:22:28.301054  660500 default_sa.go:55] duration metric: took 3.025575ms for default service account to be created ...
	I1206 11:22:28.301080  660500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 11:22:28.304711  660500 system_pods.go:86] 7 kube-system pods found
	I1206 11:22:28.304791  660500 system_pods.go:89] "coredns-66bc5c9577-fpnqh" [cee22ce0-0d6a-4f3d-8f27-76f52d094dcb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:22:28.304815  660500 system_pods.go:89] "etcd-pause-362686" [8995ba07-f227-41f3-bb7b-dd67ce35d0ee] Running
	I1206 11:22:28.304834  660500 system_pods.go:89] "kindnet-2xclh" [3cf5b95f-134e-4269-8c81-3a38b6f2a52d] Running
	I1206 11:22:28.304870  660500 system_pods.go:89] "kube-apiserver-pause-362686" [b2ca5685-08cf-4711-bd88-1184aa55260c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:22:28.304897  660500 system_pods.go:89] "kube-controller-manager-pause-362686" [ddcd8a5e-c280-40ad-9e1b-875496725a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:22:28.304916  660500 system_pods.go:89] "kube-proxy-gjknk" [a4a94694-360a-43df-8d83-3139b6279e4c] Running
	I1206 11:22:28.304951  660500 system_pods.go:89] "kube-scheduler-pause-362686" [6fa2f3a6-e9b4-4716-b97f-2b4583a0e219] Running
	I1206 11:22:28.304976  660500 system_pods.go:126] duration metric: took 3.87579ms to wait for k8s-apps to be running ...
	I1206 11:22:28.305000  660500 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 11:22:28.305087  660500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:22:28.319554  660500 system_svc.go:56] duration metric: took 14.544115ms WaitForService to wait for kubelet
	I1206 11:22:28.319633  660500 kubeadm.go:587] duration metric: took 8.029103826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:22:28.319668  660500 node_conditions.go:102] verifying NodePressure condition ...
	I1206 11:22:28.329651  660500 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 11:22:28.329732  660500 node_conditions.go:123] node cpu capacity is 2
	I1206 11:22:28.329760  660500 node_conditions.go:105] duration metric: took 10.073981ms to run NodePressure ...
	I1206 11:22:28.329786  660500 start.go:242] waiting for startup goroutines ...
	I1206 11:22:28.329827  660500 start.go:247] waiting for cluster config update ...
	I1206 11:22:28.329854  660500 start.go:256] writing updated cluster config ...
	I1206 11:22:28.330237  660500 ssh_runner.go:195] Run: rm -f paused
	I1206 11:22:28.333945  660500 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:22:28.334583  660500 kapi.go:59] client config for pause-362686: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key", CAFile:"/home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:22:28.338495  660500 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fpnqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:29.826372  660445 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:22:30.797906  660445 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:22:31.391559  660445 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:22:31.896678  660445 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:22:31.897697  660445 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:22:31.900243  660445 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:22:31.903660  660445 out.go:252]   - Booting up control plane ...
	I1206 11:22:31.903763  660445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:22:31.903838  660445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:22:31.903901  660445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:22:31.926484  660445 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:22:31.926610  660445 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:22:31.933637  660445 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:22:31.934020  660445 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:22:31.934084  660445 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:22:32.064390  660445 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:22:32.064512  660445 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1206 11:22:30.351924  660500 pod_ready.go:104] pod "coredns-66bc5c9577-fpnqh" is not "Ready", error: <nil>
	I1206 11:22:31.848988  660500 pod_ready.go:94] pod "coredns-66bc5c9577-fpnqh" is "Ready"
	I1206 11:22:31.849014  660500 pod_ready.go:86] duration metric: took 3.510452608s for pod "coredns-66bc5c9577-fpnqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:31.855011  660500 pod_ready.go:83] waiting for pod "etcd-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:31.862771  660500 pod_ready.go:94] pod "etcd-pause-362686" is "Ready"
	I1206 11:22:31.862804  660500 pod_ready.go:86] duration metric: took 7.763724ms for pod "etcd-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:31.867527  660500 pod_ready.go:83] waiting for pod "kube-apiserver-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 11:22:33.874826  660500 pod_ready.go:104] pod "kube-apiserver-pause-362686" is not "Ready", error: <nil>
	W1206 11:22:35.874868  660500 pod_ready.go:104] pod "kube-apiserver-pause-362686" is not "Ready", error: <nil>
	I1206 11:22:37.372167  660500 pod_ready.go:94] pod "kube-apiserver-pause-362686" is "Ready"
	I1206 11:22:37.372197  660500 pod_ready.go:86] duration metric: took 5.504563226s for pod "kube-apiserver-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.374239  660500 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.381251  660500 pod_ready.go:94] pod "kube-controller-manager-pause-362686" is "Ready"
	I1206 11:22:37.381280  660500 pod_ready.go:86] duration metric: took 7.012239ms for pod "kube-controller-manager-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.383756  660500 pod_ready.go:83] waiting for pod "kube-proxy-gjknk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.388591  660500 pod_ready.go:94] pod "kube-proxy-gjknk" is "Ready"
	I1206 11:22:37.388614  660500 pod_ready.go:86] duration metric: took 4.837364ms for pod "kube-proxy-gjknk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.392895  660500 pod_ready.go:83] waiting for pod "kube-scheduler-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.644944  660500 pod_ready.go:94] pod "kube-scheduler-pause-362686" is "Ready"
	I1206 11:22:37.645027  660500 pod_ready.go:86] duration metric: took 252.101935ms for pod "kube-scheduler-pause-362686" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:22:37.645056  660500 pod_ready.go:40] duration metric: took 9.311037527s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:22:37.737023  660500 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 11:22:37.747188  660500 out.go:179] * Done! kubectl is now configured to use "pause-362686" cluster and "default" namespace by default
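Note: each pod_ready.go wait in the run above reduces to checking the pod's PodReady condition against the apiserver. A client-go sketch of that predicate (clientset construction is assumed; this is illustrative, not minikube's code):

package ready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PodIsReady fetches a kube-system pod and reports whether its PodReady
// condition is True — the predicate behind the `pod "..." is "Ready"` lines.
func PodIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}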
	I1206 11:22:36.083811  660445 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 4.01970701s
	I1206 11:22:36.087386  660445 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 11:22:36.087481  660445 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 11:22:36.087566  660445 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 11:22:36.087641  660445 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.772605205Z" level=info msg="Started container" PID=2370 containerID=6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f description=kube-system/kube-apiserver-pause-362686/kube-apiserver id=1150ae8c-eff0-4cea-a60d-1812cfa79bec name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ad607eb97380c593fdfc0133145d95eee6ed325f3145d0eed00fa812ac242fd
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.780961001Z" level=info msg="Created container ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63: kube-system/coredns-66bc5c9577-fpnqh/coredns" id=a2ff1d3c-770d-4694-af34-0e6a6410ca7d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.787593924Z" level=info msg="Starting container: ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63" id=99186525-0cbb-4c58-a443-5b27e1986a3f name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.792159154Z" level=info msg="Started container" PID=2367 containerID=ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63 description=kube-system/coredns-66bc5c9577-fpnqh/coredns id=99186525-0cbb-4c58-a443-5b27e1986a3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d19fdcd8a5d0b367366a49d53d585c0412149bda945344c61f222ef59d977f4
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.828591049Z" level=info msg="Created container 86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda: kube-system/kindnet-2xclh/kindnet-cni" id=f214e09c-d4c5-489e-8ddc-4da5855d85da name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.832884793Z" level=info msg="Starting container: 86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda" id=ec22a561-1bf6-43f3-ac85-6217d273f8d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.834907334Z" level=info msg="Started container" PID=2383 containerID=86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda description=kube-system/kindnet-2xclh/kindnet-cni id=ec22a561-1bf6-43f3-ac85-6217d273f8d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9a604b6ff7ed248fa04303626569e2a9f043ab091d8700171dbdece4fd47417
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.873604794Z" level=info msg="Created container a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135: kube-system/kube-proxy-gjknk/kube-proxy" id=3dbf8187-1609-4cfd-aacc-dbd06196743f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.874305704Z" level=info msg="Starting container: a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135" id=aabe82c1-4c76-49de-a6ac-02dbf03917d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:22:19 pause-362686 crio[2103]: time="2025-12-06T11:22:19.888194752Z" level=info msg="Started container" PID=2347 containerID=a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135 description=kube-system/kube-proxy-gjknk/kube-proxy id=aabe82c1-4c76-49de-a6ac-02dbf03917d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b7890de5806e5588f9195610c019f283a188e4ba50b5c6aa593845f09c1dc62
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.294142969Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.301867466Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.301905249Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.301931095Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.305230273Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.315154095Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.315266552Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.320749286Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.32091218Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.320981216Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.328840848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.329015212Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.329098017Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.340386785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:22:30 pause-362686 crio[2103]: time="2025-12-06T11:22:30.340428598Z" level=info msg="Updated default CNI network name to kindnet"
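Note: the CREATE-temp / WRITE / RENAME event sequence CRI-O reports above is the standard atomic-replace pattern for CNI configs: the new conflist is written to a .temp file and then renamed over the final name, so the watcher only ever sees a complete file. A minimal Go sketch (paths illustrative; kindnet's actual writer may differ):

package main

import "os"

// writeConflistAtomic writes the CNI config to a temp file and renames it
// into place; rename(2) is atomic within a filesystem, so CRI-O's monitor
// observes either the old config or the new one, never a partial write.
func writeConflistAtomic(data []byte) error {
	const final = "/etc/cni/net.d/10-kindnet.conflist"
	tmp := final + ".temp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, final)
}

func main() {
	_ = writeConflistAtomic([]byte(`{"name":"kindnet","cniVersion":"0.3.1","plugins":[]}`))
}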
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	86668cc48342b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   b9a604b6ff7ed       kindnet-2xclh                          kube-system
	ebe67943652dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   6d19fdcd8a5d0       coredns-66bc5c9577-fpnqh               kube-system
	6e3cfbbf22515       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   25 seconds ago       Running             kube-apiserver            1                   3ad607eb97380       kube-apiserver-pause-362686            kube-system
	a26bba6a17e63       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   25 seconds ago       Running             kube-proxy                1                   6b7890de5806e       kube-proxy-gjknk                       kube-system
	ff260a67303ff       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   25 seconds ago       Running             kube-controller-manager   1                   74638ed0b2609       kube-controller-manager-pause-362686   kube-system
	c453d81cb3c61       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   25 seconds ago       Running             etcd                      1                   2053bfa4e8164       etcd-pause-362686                      kube-system
	0fd9511997553       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   25 seconds ago       Running             kube-scheduler            1                   e029bde32796c       kube-scheduler-pause-362686            kube-system
	6d1b063b72f99       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   6d19fdcd8a5d0       coredns-66bc5c9577-fpnqh               kube-system
	941d38d4fe915       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   b9a604b6ff7ed       kindnet-2xclh                          kube-system
	a44e62d267f8f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   6b7890de5806e       kube-proxy-gjknk                       kube-system
	edea99de7a794       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   74638ed0b2609       kube-controller-manager-pause-362686   kube-system
	a2bd67f169d22       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   2053bfa4e8164       etcd-pause-362686                      kube-system
	e5dcf878f0a2f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   e029bde32796c       kube-scheduler-pause-362686            kube-system
	a978c34bc129a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   3ad607eb97380       kube-apiserver-pause-362686            kube-system
	
	
	==> coredns [6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36828 - 35279 "HINFO IN 6217268144750739585.1611302712944024103. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006432608s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebe67943652dca7cbe83a46bb2d819cac77f0ad4b644b12ca924ab6584a4bd63] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44023 - 31970 "HINFO IN 1445320698905015370.8901211037546607024. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014312957s
	
	
	==> describe nodes <==
	Name:               pause-362686
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-362686
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=pause-362686
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T11_21_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 11:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-362686
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 11:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:21:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:21:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:21:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 11:22:05 +0000   Sat, 06 Dec 2025 11:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-362686
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                eac46063-1e56-43a9-8239-8ddd856377e9
	  Boot ID:                    e36fa5c9-4dd5-4964-a1e1-f5022a7b372f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fpnqh                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-362686                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         89s
	  kube-system                 kindnet-2xclh                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-362686             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-362686    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-gjknk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-362686             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  98s (x8 over 99s)  kubelet          Node pause-362686 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x8 over 99s)  kubelet          Node pause-362686 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x8 over 99s)  kubelet          Node pause-362686 status is now: NodeHasSufficientPID
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-362686 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-362686 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-362686 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-362686 event: Registered Node pause-362686 in Controller
	  Normal   NodeReady                39s                kubelet          Node pause-362686 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-362686 event: Registered Node pause-362686 in Controller
	
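Note: the Allocated resources percentages in the node description above follow from the node's allocatable capacity (2 CPUs, 8022300Ki memory), with the fraction truncated toward zero: 850m / 2000m = 42.5%, shown as 42%; and 220Mi = 225280Ki, 225280 / 8022300 ≈ 2.8%, shown as 2%.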
	
	==> dmesg <==
	[  +3.396905] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:59] overlayfs: idmapped layers are currently not supported
	[ +34.069943] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:00] overlayfs: idmapped layers are currently not supported
	[  +3.921462] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:01] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:02] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:03] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:08] overlayfs: idmapped layers are currently not supported
	[ +32.041559] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:11] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:12] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:13] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:14] overlayfs: idmapped layers are currently not supported
	[  +0.520412] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:15] overlayfs: idmapped layers are currently not supported
	[ +26.850323] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:16] overlayfs: idmapped layers are currently not supported
	[ +26.214447] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:19] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:21] overlayfs: idmapped layers are currently not supported
	[  +0.844232] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a2bd67f169d223a769428c661c985dc250fa2eb1f1d2f69b7452ba14c1cdaaf4] <==
	{"level":"warn","ts":"2025-12-06T11:21:11.314106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.371339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.459942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.524502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.559399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.625395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:21:11.843272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T11:22:11.388093Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T11:22:11.388149Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-362686","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-06T11:22:11.388243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T11:22:11.939759Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T11:22:11.939840Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:22:11.939860Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-06T11:22:11.939956Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T11:22:11.939975Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940199Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940236Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:22:11.940245Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940281Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:22:11.940296Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:22:11.940304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:22:11.943294Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-06T11:22:11.943373Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:22:11.943399Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T11:22:11.943411Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-362686","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [c453d81cb3c615216b5765eff485bd7cf640ceb31d76bdc3dfd6a126ddd6e142] <==
	{"level":"warn","ts":"2025-12-06T11:22:24.224177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.239332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.273364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.289392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.324773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.339659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.355985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.374578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.392737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.439739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.467640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.508101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.526654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.559442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.597778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.617069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.637593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.654391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.707616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.743479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.785918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.830720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.872611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:24.905950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:22:25.046314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:22:45 up  4:05,  0 user,  load average: 4.51, 2.81, 2.25
	Linux pause-362686 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [86668cc48342b6b06db4ce1d779a41d7855a18e4c0a057f86e1158cc5f6d5eda] <==
	I1206 11:22:20.020071       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:22:20.020323       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 11:22:20.020810       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:22:20.020826       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:22:20.020842       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:22:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:22:20.293346       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:22:20.293411       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:22:20.293421       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:22:20.294138       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 11:22:27.001362       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:22:27.001398       1 metrics.go:72] Registering metrics
	I1206 11:22:27.001477       1 controller.go:711] "Syncing nftables rules"
	I1206 11:22:30.293655       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 11:22:30.293826       1 main.go:301] handling current node
	I1206 11:22:40.295668       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 11:22:40.295729       1 main.go:301] handling current node
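
The "nri plugin exited: failed to connect to NRI service" line is non-fatal here: kindnet logs it and still syncs its caches and begins handling the node. The socket is simply absent when the container runtime has NRI disabled, which can be checked (path taken from the error message) with:

	minikube -p pause-362686 ssh -- ls -l /var/run/nri/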
	
	
	==> kindnet [941d38d4fe915ca06d5a8cc2dd6e1239af193b6889d323f017ed16e115e81d35] <==
	I1206 11:21:25.437908       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:21:25.444880       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 11:21:25.445054       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:21:25.445081       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:21:25.445094       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:21:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:21:25.667333       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:21:25.667429       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:21:25.667464       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:21:25.668237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 11:21:55.668419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 11:21:55.668418       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 11:21:55.668550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 11:21:55.668649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 11:21:57.367697       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:21:57.367731       1 metrics.go:72] Registering metrics
	I1206 11:21:57.367806       1 controller.go:711] "Syncing nftables rules"
	I1206 11:22:05.671225       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 11:22:05.671281       1 main.go:301] handling current node
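
This earlier kindnet instance hits 30s i/o timeouts listing resources via the service VIP 10.96.0.1:443 while the apiserver is restarting, then recovers once caches sync at 11:21:57. A quick VIP reachability check from inside the cluster uses a throwaway pod (pod name and image here are illustrative; /healthz is readable anonymously under default RBAC):

	kubectl --context pause-362686 run vip-check --rm -it --restart=Never \
	  --image=curlimages/curl --command -- curl -sk https://10.96.0.1/healthz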
	
	
	==> kube-apiserver [6e3cfbbf22515804d814e640a38a565801f45c2eb8911d6a2683c88b1e27721f] <==
	I1206 11:22:26.819237       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 11:22:26.819329       1 policy_source.go:240] refreshing policies
	I1206 11:22:26.827197       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 11:22:26.828088       1 aggregator.go:171] initial CRD sync complete...
	I1206 11:22:26.828147       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 11:22:26.828181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 11:22:26.834063       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 11:22:26.841822       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 11:22:26.849259       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 11:22:26.849419       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 11:22:26.860728       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 11:22:26.867214       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 11:22:26.867416       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 11:22:26.867506       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 11:22:26.861136       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 11:22:26.895810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 11:22:26.909952       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1206 11:22:26.924935       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 11:22:26.932396       1 cache.go:39] Caches are synced for autoregister controller
	I1206 11:22:27.304007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 11:22:28.819682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 11:22:30.318657       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 11:22:30.342110       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 11:22:30.381438       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 11:22:30.533491       1 controller.go:667] quota admission added evaluator for: deployments.apps
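
The "Error removing old endpoints from kubernetes service ... refusing to erase all endpoints" entry above is a common, self-healing message on apiserver restart: the endpoint reconciler declines to act until storage is repopulated. The object in question can be inspected directly:

	kubectl --context pause-362686 -n default get endpoints kubernetes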
	
	
	==> kube-apiserver [a978c34bc129a8093c51a8672e8d1d3c8a66e7e93bc4096a4ed9b46a5133bf24] <==
	W1206 11:22:11.425067       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.425106       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.425190       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.425254       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426369       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426494       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426583       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.426666       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429719       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429788       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429832       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429870       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429898       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429928       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429956       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.429979       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.430006       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431235       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431265       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431290       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431313       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.431337       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.433082       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.433206       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:22:11.433292       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
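
These gRPC dial failures are the apiserver's etcd client spinning against 127.0.0.1:2379 while etcd (above) is shutting down; every watch channel reports the same "connection refused". Once both components are back up, the per-check readiness report should show the etcd check healthy again:

	kubectl --context pause-362686 get --raw='/readyz?verbose'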
	
	
	==> kube-controller-manager [edea99de7a79435a14ae5bb6a539e81bf5c38079dc33137b11444b62b1de8815] <==
	I1206 11:21:23.267912       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 11:21:23.268011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 11:21:23.268382       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 11:21:23.268928       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 11:21:23.269000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 11:21:23.270063       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 11:21:23.270146       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 11:21:23.270298       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 11:21:23.271412       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 11:21:23.271493       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 11:21:23.278955       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 11:21:23.283586       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 11:21:23.287866       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:21:23.290324       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 11:21:23.290443       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:21:23.292305       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 11:21:23.292325       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 11:21:23.292183       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 11:21:23.292982       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:21:23.315747       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 11:21:23.315943       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 11:21:23.316062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-362686"
	I1206 11:21:23.316148       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1206 11:21:23.338974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:22:08.322006       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
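
The node-lifecycle controller's "master disruption mode" messages bracket the window in which the node was NotReady (entered 11:21:23, exited 11:22:08). The node's current Ready condition can be read with a jsonpath filter:

	kubectl --context pause-362686 get node pause-362686 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'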
	
	
	==> kube-controller-manager [ff260a67303ff3f7a8aa0797d085aa7948f99f8e8c90b67ee4407f01cd45e323] <==
	I1206 11:22:30.274165       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 11:22:30.275250       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 11:22:30.275326       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 11:22:30.278702       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 11:22:30.279476       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 11:22:30.279560       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 11:22:30.279601       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 11:22:30.282913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:22:30.285589       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 11:22:30.288008       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 11:22:30.288167       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 11:22:30.288271       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-362686"
	I1206 11:22:30.288365       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 11:22:30.288429       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 11:22:30.296545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 11:22:30.296684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 11:22:30.296813       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 11:22:30.297448       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 11:22:30.302800       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 11:22:30.305034       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 11:22:30.302814       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 11:22:30.307885       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 11:22:30.324011       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 11:22:30.327251       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 11:22:30.327685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [a26bba6a17e6370c96bd7c1a6acad88e7c45321d94b62ab7fd35fd42563c6135] <==
	I1206 11:22:23.953831       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:22:24.896353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 11:22:27.131338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:22:27.153434       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 11:22:27.173534       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:22:27.859252       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:22:27.859408       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:22:27.884355       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:22:27.884762       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:22:27.884968       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:22:27.886246       1 config.go:200] "Starting service config controller"
	I1206 11:22:27.886370       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:22:27.886417       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:22:27.886445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:22:27.886482       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:22:27.886509       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:22:27.887337       1 config.go:309] "Starting node config controller"
	I1206 11:22:27.887390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:22:27.887418       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:22:27.987425       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 11:22:27.987524       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:22:27.987538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
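
The "Kube-proxy configuration may be incomplete or incorrect" error above is advisory: with nodePortAddresses unset, NodePort connections are accepted on all local IPs, and the message itself names `--nodeport-addresses primary` as the remedy. On a kubeadm-provisioned cluster like this one, the setting would normally live in the kube-proxy ConfigMap (name assumed to be the kubeadm default):

	kubectl --context pause-362686 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses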
	
	
	==> kube-proxy [a44e62d267f8fee2c6800bbd3ace8990c75f30bbc3bb324584f31501e6d0b0e0] <==
	I1206 11:21:25.509497       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:21:25.630085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 11:21:25.750045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:21:25.782763       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1206 11:21:25.795970       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:21:25.902462       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:21:25.902586       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:21:25.910046       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:21:25.910433       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:21:25.910671       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:21:25.912281       1 config.go:200] "Starting service config controller"
	I1206 11:21:25.912352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:21:25.912394       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:21:25.912435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:21:25.912495       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:21:25.912524       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:21:25.913284       1 config.go:309] "Starting node config controller"
	I1206 11:21:25.916065       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:21:25.916178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:21:26.012879       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 11:21:26.012930       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:21:26.013011       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0fd951199755384d101f360f2a37416ef2791debea5e34742392446869de4356] <==
	I1206 11:22:24.142802       1 serving.go:386] Generated self-signed cert in-memory
	I1206 11:22:27.200010       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 11:22:27.211446       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:22:27.222858       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 11:22:27.222939       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 11:22:27.222959       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1206 11:22:27.222995       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 11:22:27.225406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:27.225421       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:27.225455       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 11:22:27.225461       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 11:22:27.324504       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1206 11:22:27.325683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 11:22:27.325783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e5dcf878f0a2fc09413f380ae032038a9f6a343f47a1c3939bf59537afe75948] <==
	E1206 11:21:16.924138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 11:21:16.924252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 11:21:16.924287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 11:21:16.924320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 11:21:16.924360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 11:21:16.924443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 11:21:16.924486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 11:21:16.924525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 11:21:16.941065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 11:21:16.942563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 11:21:16.942647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 11:21:16.942654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 11:21:16.942731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 11:21:16.942786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 11:21:16.942859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 11:21:16.942966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 11:21:16.943081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 11:21:16.943817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1206 11:21:18.023911       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:11.385200       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 11:22:11.385311       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 11:22:11.385323       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 11:22:11.385346       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:22:11.385592       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 11:22:11.385616       1 run.go:72] "command failed" err="finished without leader elect"
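
"finished without leader elect" is the expected exit path when a scheduler is terminated while participating in leader election, as happens during the pause/stop cycle; it is not itself a failure. The current leader can be read from the election Lease:

	kubectl --context pause-362686 -n kube-system get lease kube-scheduler \
	  -o jsonpath='{.spec.holderIdentity}'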
	
	
	==> kubelet <==
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.521693    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.522039    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: I1206 11:22:19.526144    1308 scope.go:117] "RemoveContainer" containerID="6d1b063b72f9938ca522120a5fbd763acc547f1a23d25c7fdabad14c548f5751"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.526820    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e5f9392d751fb88f96ab0b53f361fb38" pod="kube-system/kube-controller-manager-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.527069    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjknk\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4a94694-360a-43df-8d83-3139b6279e4c" pod="kube-system/kube-proxy-gjknk"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.527714    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-2xclh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3cf5b95f-134e-4269-8c81-3a38b6f2a52d" pod="kube-system/kindnet-2xclh"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528007    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-fpnqh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cee22ce0-0d6a-4f3d-8f27-76f52d094dcb" pod="kube-system/coredns-66bc5c9577-fpnqh"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528330    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528634    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:19 pause-362686 kubelet[1308]: E1206 11:22:19.528933    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-362686\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4d04ad68b81ee4f3b5ceee13f7759de" pod="kube-system/kube-apiserver-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.544275    1308 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-362686\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.544811    1308 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-362686\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.547389    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.548379    1308 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-362686\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.726102    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.741971    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="a4d04ad68b81ee4f3b5ceee13f7759de" pod="kube-system/kube-apiserver-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.756293    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="e5f9392d751fb88f96ab0b53f361fb38" pod="kube-system/kube-controller-manager-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.758147    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gjknk\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="a4a94694-360a-43df-8d83-3139b6279e4c" pod="kube-system/kube-proxy-gjknk"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.760716    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-2xclh\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="3cf5b95f-134e-4269-8c81-3a38b6f2a52d" pod="kube-system/kindnet-2xclh"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.770854    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-fpnqh\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="cee22ce0-0d6a-4f3d-8f27-76f52d094dcb" pod="kube-system/coredns-66bc5c9577-fpnqh"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.780099    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="e379fa1648ef64a3c1d72bbf64384195" pod="kube-system/kube-scheduler-pause-362686"
	Dec 06 11:22:26 pause-362686 kubelet[1308]: E1206 11:22:26.827688    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-362686\" is forbidden: User \"system:node:pause-362686\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-362686' and this object" podUID="3685b560b5fbcc25b6156ee63b418cd7" pod="kube-system/etcd-pause-362686"
	Dec 06 11:22:38 pause-362686 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 11:22:38 pause-362686 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 11:22:38 pause-362686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
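
The final systemd lines show kubelet being stopped deliberately, consistent with the `minikube pause` flow of stopping kubelet before freezing workload containers; the connection-refused and "no relationship found" errors above them cover the window in which the apiserver restarted and node authorization caught up. Whether kubelet is running afterwards can be checked (assuming the profile still exists) with:

	minikube -p pause-362686 ssh -- sudo systemctl is-active kubelet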
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-362686 -n pause-362686
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-362686 -n pause-362686: exit status 2 (558.703896ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-362686 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.64s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.081s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1206 12:01:50.695113  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kindnet-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 7 more times]
E1206 12:01:58.999908  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/custom-flannel-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 31 more times]
E1206 12:02:30.952921  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/enable-default-cni-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 44 more times]
E1206 12:03:15.605183  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 5 more times]
E1206 12:03:22.063257  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/custom-flannel-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 11 more times]
E1206 12:03:33.779335  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 1 more time]
E1206 12:03:35.937559  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 1 more time]
E1206 12:03:38.012927  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/flannel-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 15 more times]
E1206 12:03:54.016888  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/enable-default-cni-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 34 more times]
E1206 12:04:29.148407  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/bridge-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 27 more times]
E1206 12:04:56.333544  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 3 more times]
E1206 12:05:01.077269  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/flannel-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 11 more times]
E1206 12:05:12.750579  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/calico-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:13.254744  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 11 more times]
E1206 12:05:25.052689  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/old-k8s-version-860737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 2 more times]
E1206 12:05:27.629877  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kindnet-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 23 more times]
E1206 12:05:52.216393  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/bridge-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 18 more times]
E1206 12:06:10.847061  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/default-k8s-diff-port-466867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 36 more times]
E1206 12:06:48.119802  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/old-k8s-version-860737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 10 more times]
E1206 12:06:58.999417  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/custom-flannel-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning repeated 23 more times]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1206 12:07:30.952360  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/enable-default-cni-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 44 more times]
E1206 12:08:15.605059  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 17 more times]
E1206 12:08:33.779275  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 1 more time]
E1206 12:08:35.936978  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 1 more time]
E1206 12:08:38.013211  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/flannel-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 50 more times]
E1206 12:09:29.148040  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/bridge-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 33 more times]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1206 12:10:12.751342  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/calico-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:10:13.255383  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 11 more times]
E1206 12:10:25.052707  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/old-k8s-version-860737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 2 more times]
E1206 12:10:27.630263  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/kindnet-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 14 more times]
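
For context, each WARNING above is the test helper re-issuing a pod list against the cluster's apiserver, which refuses connections because the no-preload node is stopped. A minimal client-go sketch of the same request (the kubeconfig path is hypothetical; namespace and label selector are taken from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // This is the request the helper retries; while kube-apiserver on
        // 192.168.85.2:8443 is down it fails with "connect: connection refused".
        pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
        if err != nil {
            fmt.Println("pod list failed:", err)
            return
        }
        fmt.Println("dashboard pods:", len(pods.Items))
    }

Once the apiserver comes back, the same call returns the dashboard pods and the warnings stop; here it never did within the test window.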
panic: test timed out after 2h0m0s
	running tests:
		TestStartStop (33m35s)
		TestStartStop/group/no-preload (25m39s)
		TestStartStop/group/no-preload/serial (25m39s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8m54s)
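
The panic below is the Go test runner's own watchdog: testing.(*M).startAlarm arms a timer for the -timeout value (2h0m0s here), and when it fires the binary panics, printing the tests still running and then a dump of every goroutine. A minimal reproduction sketch (hypothetical demo package; run with "go test -timeout 5s"):

    package demo

    import (
        "testing"
        "time"
    )

    // Sleeping past the -timeout deadline makes the test binary panic with
    // "panic: test timed out after 5s", list the running tests, and dump all
    // goroutines - the same shape as the output below.
    func TestHangs(t *testing.T) {
        time.Sleep(10 * time.Second)
    }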

goroutine 6131 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40004a2700, 0x40007c9bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40005ec288, {0x534c580, 0x2c, 0x2c}, {0x40007c9d08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40006eed20)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40006eed20)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 3810 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016c2cc0, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3792
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0
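
The parked cert_rotation goroutines here (and the E1206 "Loading client cert failed" lines earlier) come from client-go: each cached TLS transport gets a background worker that re-reads the client certificate from disk, and the worker keeps running after a minikube profile is deleted, so the re-read fails with "no such file or directory". The rotation machinery itself is unexported; a stdlib sketch of the equivalent failing call (paths copied from the errors above):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // The rotation worker ultimately re-reads the key pair from disk; once
        // the profile directory is removed, this fails the same way.
        _, err := tls.LoadX509KeyPair(
            "/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/calico-334090/client.crt",
            "/home/jenkins/minikube-integration/22049-484819/.minikube/profiles/calico-334090/client.key",
        )
        fmt.Println(err) // open ...: no such file or directory
    }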

goroutine 3814 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x4001610740, 0x4001610788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0xe8?, 0x4001610740, 0x4001610788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40016f4a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3810
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c
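
PollImmediateUntil and its relatives are apimachinery's deprecated polling helpers; each waiting transport worker parks in one of these select loops. A small sketch of the context-based replacement available in the v0.33.4 module used here:

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Retry a condition every second, starting immediately, until it
        // succeeds or 30s elapse - the same loop shape as the frames above.
        err := wait.PollUntilContextTimeout(context.Background(),
            time.Second, 30*time.Second, true,
            func(ctx context.Context) (bool, error) {
                fmt.Println("checking...")
                return false, nil // keep polling
            })
        fmt.Println(err) // prints the timeout error after ~30s
    }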

goroutine 4985 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4984
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4169 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40005efe90, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40005efe80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40004fd2c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400142c770?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40014646a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x400150bf38, {0x369d700, 0x40019807b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40014647a8?, {0x369d700?, 0x40019807b0?}, 0x20?, 0x40019bf800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001994410, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
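
The many sync.Cond.Wait frames like this one are idle workqueue consumers: Typed.Get sleeps on a condition variable until an item is added or the queue is shut down. The underlying stdlib pattern, in miniature:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var mu sync.Mutex
        cond := sync.NewCond(&mu)
        var items []string
        done := make(chan struct{})

        // Consumer: park in cond.Wait until a producer signals; idle workers
        // in the dump above are sitting exactly here.
        go func() {
            mu.Lock()
            for len(items) == 0 {
                cond.Wait()
            }
            fmt.Println("got:", items[0])
            mu.Unlock()
            close(done)
        }()

        // Producer: add an item and wake the consumer.
        mu.Lock()
        items = append(items, "work-item")
        cond.Signal()
        mu.Unlock()
        <-done
    }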

goroutine 3701 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x4001610f40, 0x400150af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x56?, 0x4001610f40, 0x4001610f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x0?, 0x4001610f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x40001bc080?, 0x40005ce900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3683
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4267 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40014f33d0, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40014f33c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001be8060)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003a9c70?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40000a56a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x4001435f38, {0x369d700, 0x40013c1ad0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40000a57a8?, {0x369d700?, 0x40013c1ad0?}, 0x9c?, 0x40017f7500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001995a50, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1216 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff60554e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001ba0280?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001ba0280)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001ba0280)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001c85980)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001c85980)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40002daf00, {0x36d31a0, 0x4001c85980})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40002daf00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1214
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
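
Goroutine 1216 (and 588 further down) is the local HTTP proxy that the functional tests start for their proxy checks; ListenAndServe blocks in Accept for the life of the binary, so it always shows up in a dump like this and is not itself a leak. A rough sketch of the shape (upstream target is hypothetical):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // startProxy mirrors the pattern in the trace: serve in a goroutine and
    // return; that goroutine then parks in Accept, as in goroutine 1216.
    func startProxy() *http.Server {
        target, _ := url.Parse("http://127.0.0.1:8080") // hypothetical upstream
        srv := &http.Server{
            Addr:    "127.0.0.1:0", // any free port
            Handler: httputil.NewSingleHostReverseProxy(target),
        }
        go func() {
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Printf("proxy stopped: %v", err)
            }
        }()
        return srv
    }

    func main() {
        _ = startProxy()
        select {} // keep the process alive for the demo
    }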

goroutine 851 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x40013ab740, 0x4001437f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x48?, 0x40013ab740, 0x40013ab788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x400187f680?, 0x4001831a40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40005cfc80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 815
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3682 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40005ce900?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3664
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4053 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001c81150, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001c81140)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001be9ce0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400142c230?, 0x11e6d8?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x369c080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x400150df38, {0x369d700, 0x400135b110}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d700?, 0x400135b110?}, 0x90?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a28fa0, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4050
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 815 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013c7980, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 841
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 144 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x4001465f40, 0x4001436f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x58?, 0x4001465f40, 0x4001465f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40005ce780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 154
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 143 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000747c50, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000747c40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014fb440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003a8af0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x4001385f38, {0x369d700, 0x40013067b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369d700?, 0x40013067b0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400130c380, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 154
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 161 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 144
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 154 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014fb440, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 146
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 153 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 146
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1159 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001d57680, 0x4001c78d20)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 712
	/usr/local/go/src/os/exec/exec.go:775 +0x678
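
The "chan send" goroutines blocked for 80-111 minutes are os/exec watchdogs: exec.CommandContext starts a watchCtx goroutine that, once the context is done, blocks sending its result until Wait is called on the command, so a command that was Started but never Waited on typically leaks one of these for the rest of the run. Sketch of the correct pairing:

    package main

    import (
        "context"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "sleep", "10")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        // Wait must always follow Start: it reaps the process and lets the
        // internal watchCtx goroutine deliver its result. Skipping it leaves
        // that goroutine blocked on a channel send, as in the frames above.
        _ = cmd.Wait()
    }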

goroutine 3808 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40016a00d0, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a00c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001d71440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000083180?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40016146a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x40000d7f38, {0x369d700, 0x4001cea030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40016147a8?, {0x369d700?, 0x4001cea030?}, 0x60?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001b244a0, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3837
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4268 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x40013ab740, 0x40013ab788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x88?, 0x40013ab740, 0x40013ab788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x400022d3b0?, 0x4001ce2460?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x400145a4d8?, 0x4001995a20?, 0x4001977690?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4170 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x4001466f40, 0x4001466f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0xf8?, 0x4001466f40, 0x4001466f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x4001a12900?, 0x4000239b80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40001d0c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 814 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x400159da40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 841
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 852 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 851
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4500 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001be9860, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4495
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4055 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4054
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3671 [chan receive, 26 minutes]:
testing.(*T).Run(0x40016f5c00, {0x296e9b1?, 0x0?}, 0x4001a33680)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x40016f5c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x40016f5c00, 0x4001a4a400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3667
	/usr/local/go/src/testing/testing.go:1997 +0x364
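
Goroutine 3671 shows how the subtests nest: testing.(*T).Run blocks the parent on a channel receive until the child finishes, so a wedged leaf test (here UserAppExistsAfterStop) holds its whole ancestry, which is why TestStartStop reports 33m35s of running time. The shape in miniature:

    package demo

    import "testing"

    // The parent blocks inside t.Run (a channel receive) until the child
    // returns; if the innermost function hangs, every ancestor shows up in
    // the dump as a long "chan receive", as with goroutine 3671 above.
    func TestStartStopShape(t *testing.T) {
        t.Run("group", func(t *testing.T) {
            t.Run("serial", func(t *testing.T) {
                // leaf test work happens here
            })
        })
    }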

goroutine 1632 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001be8360, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1630
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1060 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0x4001c5e000, 0x4001b63dc0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1059
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3836 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x400145b880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3835
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 588 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff60555600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400046b280?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400046b280)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400046b280)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40014f20c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40014f20c0)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40002da900, {0x36d31a0, 0x40014f20c0})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40002da900)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 586
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 4771 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013c7320, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4750
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1985 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40001d1680, 0x4001bf5030)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1984
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3813 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001992610, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001992600)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016c2cc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002b1b20?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x400150ef38, {0x369d700, 0x40013c0f30}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x40013c0f30?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016fbc10, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3810
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1180 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x4001622360)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1178
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 1609 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1608
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4171 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4170
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4049 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40014dc000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4045
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1181 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x4001622360)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1178
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 4399 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x40013aaf40, 0x40013aaf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x10?, 0x40013aaf40, 0x40013aaf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x400146ec00?, 0x4001a3ef00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400145a8c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4395
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4166 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40004fd2c0, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4161
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 2034 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40005cf500, 0x4000082af0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1404
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1947 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40001d0780, 0x4001bf43f0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1946
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3683 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40019a3080, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3664
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4979 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40005ce780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4978
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4394 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x4001c4a000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4380
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4983 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40016a04d0, 0x1)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a04c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014fae40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400022e070?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x400136ef38, {0x369d700, 0x4001762120}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x4001762120?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019951d0, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4980
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

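Goroutine 4983 is the consumer half of the cert-rotation machinery: a worker run under wait.Until that parks in the workqueue's sync.Cond.Wait until a key arrives. A minimal sketch of the same shape using only the public wait and workqueue APIs; the key is illustrative, not client-go's internal state:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewTyped[string]()
	stop := make(chan struct{})
	done := make(chan struct{})

	// Same shape as runWorker/processNextWorkItem in the stack above:
	// Get parks in sync.Cond.Wait while the queue is empty.
	go wait.Until(func() {
		for {
			key, shutdown := queue.Get()
			if shutdown {
				close(done)
				return
			}
			fmt.Println("processing", key)
			queue.Done(key)
		}
	}, time.Second, stop)

	queue.Add("rotate-client-cert")
	queue.ShutDown() // remaining items drain, then Get reports shutdown
	<-done
	close(stop)
}
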
goroutine 4223 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001be8060, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4221
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 850 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40005efc50, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40005efc40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013c7980)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001573730?, 0x796c6e4f74736f48?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x64416c616e726574?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x40012f5f38, {0x369d700, 0x4001563c50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x5520736572616873?, {0x369d700?, 0x4001563c50?}, 0x66?, 0x663a79786f725053?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001710c60, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 815
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3700 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001a4ab90, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001a4ab80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40019a3080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001572850?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40013a7ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x4001370f38, {0x369d700, 0x40015de510}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d700?, 0x40015de510?}, 0x20?, 0x36e57f8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40013ca4e0, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3683
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1117 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001d57500, 0x4001c79a40)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1116
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4480 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40014f3650, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40014f3640)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001be9860)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001b621c0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40013a8ef8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x400150cf38, {0x369d700, 0x40015df290}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369d700?, 0x40015df290?}, 0xe0?, 0x40017ec780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400130c200, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4500
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3809 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x400159d180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3792
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4514 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4513
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4054 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x40013a4740, 0x40013a4788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x5?, 0x40013a4740, 0x40013a4788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x0?, 0x40013a4750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x40001bc080?, 0x40014dc000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4050
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3192 [chan receive, 33 minutes]:
testing.(*T).Run(0x40016f48c0, {0x296d53f?, 0x4001387f58?}, 0x339b6f8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40016f48c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40016f48c0, 0x339b510)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

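Goroutine 3192 is TestStartStop itself, parked in testing.(*T).Run's "chan receive": a parent test blocks there until its subtest signals completion, so one long-running leaf (such as the 7200s UserAppExistsAfterStop run in this report) keeps every ancestor pinned. A minimal sketch of that nesting, with illustrative names:

package example

import "testing"

// Each t.Run call blocks on a channel receive until its child
// finishes, which is exactly where the outer frames above are parked.
func TestStartStopShape(t *testing.T) {
	t.Run("group", func(t *testing.T) {
		t.Run("no-preload", func(t *testing.T) {
			t.Log("leaf runs; every ancestor waits in (*T).Run")
		})
	})
}
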
goroutine 3841 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x40013a9f40, 0x40013a9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x28?, 0x40013a9f40, 0x40013a9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x40001d0f00?, 0x40002ea280?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x400145a158?, 0x40017103c0?, 0x4000420fc8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3837
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4978 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e57f8, 0x400022e380}, {0x36d3800, 0x4001ce7c80}, 0x1, 0x0, 0x400140fbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e57f8?, 0x4000467c70?}, 0x3b9aca00, 0x400140fe08?, 0x1, 0x400140fbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e57f8, 0x4000467c70}, 0x400145a1c0, {0x400042b4d0, 0x11}, {0x2993fae, 0x14}, {0x29abe5c, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:379 +0x22c
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36e57f8, 0x4000467c70}, 0x400145a1c0, {0x400042b4d0, 0x11}, {0x2978519?, 0x11dc031e00161e84?}, {0x69341b2d?, 0x400143af58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0xf8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x400145a1c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x400145a1c0, 0x4001804380)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4490
	/usr/local/go/src/testing/testing.go:1997 +0x364

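Goroutine 4978 is the test doing actual work: PodWait polling the cluster through apimachinery's wait.PollUntilContextTimeout, the same call visible in its stack. A hedged sketch of that call shape; the interval, timeout, and condition are illustrative, not minikube's values:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	err := wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 5*time.Second,
		true, // immediate: run the condition once before the first tick
		func(ctx context.Context) (bool, error) {
			attempts++
			return attempts >= 3, nil // false, nil keeps polling until timeout
		})
	fmt.Println("attempts:", attempts, "err:", err)
}
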
goroutine 4165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x4001a99c00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4161
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4765 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x4001611740, 0x4001382f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x8?, 0x4001611740, 0x4001611788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x4001c4b200?, 0x40002383c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001c4a600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4771
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3702 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3701
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4499 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40006d4280?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4495
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4764 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0x4001c80e50, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001c80e40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013c7320)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002e41c0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x40000d6f38, {0x369d700, 0x4001a50600}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x4001a50600?}, 0x80?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001710b00, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4771
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4269 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4268
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1631 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40006e0a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1630
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3815 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3814
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3837 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001d71440, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3835
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3842 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3841
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4984 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x40013a8740, 0x40013a8788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x80?, 0x40013a8740, 0x40013a8788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40005ffd40?, 0x95c64?, 0x4001c4a600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4980
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4222 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x400145b880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4221
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1607 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001c85250, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001c85240)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001be8360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001762b40?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40014646a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x4001439f38, {0x369d700, 0x40013ad710}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369d700?, 0x40013ad710?}, 0xa0?, 0x40006e37c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40013cb2e0, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1632
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4770 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40005cfb00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4750
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4490 [chan receive, 8 minutes]:
testing.(*T).Run(0x4001a996c0, {0x2999fbb?, 0x40000006ee?}, 0x4001804380)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4001a996c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4001a996c0, 0x4001a33680)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3671
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4398 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001992510, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001992500)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40018653e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001981350?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x40003a8000?}, 0x40013a86a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x40003a8000}, 0x4001438f38, {0x369d700, 0x400153c180}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013a87a8?, {0x369d700?, 0x400153c180?}, 0xe0?, 0x4001c4b980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001cd4cf0, 0x3b9aca00, 0x0, 0x1, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4395
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1608 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x4001e46740, 0x400136ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x78?, 0x4001e46740, 0x4001e46788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x746f6e6e61632022?, 0x6572207473696c20?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40006e0000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1632
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3667 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40016f4c40, 0x339b6f8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3192
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4400 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4399
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4050 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001be9ce0, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4045
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4513 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x40003a8000}, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x40003a8000}, 0x78?, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x40003a8000?}, 0x161f90?, 0x400145bc00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001800000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4500
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4395 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40018653e0, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4380
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4766 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4765
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4980 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014fae40, 0x40003a8000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4978
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0


Test pass (286/364)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 5.02
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.31
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.19
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.28
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 152.86
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.96
57 TestAddons/StoppedEnableDisable 12.43
58 TestCertOptions 37.17
59 TestCertExpiration 240.17
61 TestForceSystemdFlag 42.03
62 TestForceSystemdEnv 42.47
67 TestErrorSpam/setup 32.67
68 TestErrorSpam/start 0.82
69 TestErrorSpam/status 1.15
70 TestErrorSpam/pause 6.63
71 TestErrorSpam/unpause 5.76
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 47.11
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 30.05
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.5
84 TestFunctional/serial/CacheCmd/cache/add_local 1.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 37.42
93 TestFunctional/serial/ComponentHealth 0.14
94 TestFunctional/serial/LogsCmd 1.51
95 TestFunctional/serial/LogsFileCmd 1.61
96 TestFunctional/serial/InvalidService 3.92
98 TestFunctional/parallel/ConfigCmd 0.51
99 TestFunctional/parallel/DashboardCmd 7.27
100 TestFunctional/parallel/DryRun 0.52
101 TestFunctional/parallel/InternationalLanguage 0.26
102 TestFunctional/parallel/StatusCmd 1.07
106 TestFunctional/parallel/ServiceCmdConnect 7.68
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 25.45
110 TestFunctional/parallel/SSHCmd 0.56
111 TestFunctional/parallel/CpCmd 2.12
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.2
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
122 TestFunctional/parallel/License 0.41
123 TestFunctional/parallel/Version/short 0.09
124 TestFunctional/parallel/Version/components 0.95
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.7
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
129 TestFunctional/parallel/ImageCommands/ImageBuild 6.27
130 TestFunctional/parallel/ImageCommands/Setup 0.68
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.59
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
136 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
147 TestFunctional/parallel/ServiceCmd/List 0.36
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
150 TestFunctional/parallel/ServiceCmd/Format 0.38
151 TestFunctional/parallel/ServiceCmd/URL 0.38
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
159 TestFunctional/parallel/ProfileCmd/profile_list 0.43
160 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
161 TestFunctional/parallel/MountCmd/any-port 7.32
162 TestFunctional/parallel/MountCmd/specific-port 2.11
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.3
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.47
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.09
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.91
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.99
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.51
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.71
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.08
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.37
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.16
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.72
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.35
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.52
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.92
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.27
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.53
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.99
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.52
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.71
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.95
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.53
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.38
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.96
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.25
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.01
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 204.86
265 TestMultiControlPlane/serial/DeployApp 9.19
266 TestMultiControlPlane/serial/PingHostFromPods 1.54
267 TestMultiControlPlane/serial/AddWorkerNode 59.21
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
270 TestMultiControlPlane/serial/CopyFile 20.68
271 TestMultiControlPlane/serial/StopSecondaryNode 3.16
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
273 TestMultiControlPlane/serial/RestartSecondaryNode 31.18
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 117.41
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.57
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
278 TestMultiControlPlane/serial/StopCluster 36.28
279 TestMultiControlPlane/serial/RestartCluster 62.91
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
281 TestMultiControlPlane/serial/AddSecondaryNode 83.35
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
287 TestJSONOutput/start/Command 75.52
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.82
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.26
312 TestKicCustomNetwork/create_custom_network 36.75
313 TestKicCustomNetwork/use_default_bridge_network 35.45
314 TestKicExistingNetwork 33.26
315 TestKicCustomSubnet 38.56
316 TestKicStaticIP 36.69
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 69.65
321 TestMountStart/serial/StartWithMountFirst 9.07
322 TestMountStart/serial/VerifyMountFirst 0.3
323 TestMountStart/serial/StartWithMountSecond 8.89
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.74
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.99
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 139.82
333 TestMultiNode/serial/DeployApp2Nodes 5.24
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 56.76
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.75
338 TestMultiNode/serial/CopyFile 11.04
339 TestMultiNode/serial/StopNode 2.43
340 TestMultiNode/serial/StartAfterStop 8.27
341 TestMultiNode/serial/RestartKeepsNodes 79.34
342 TestMultiNode/serial/DeleteNode 5.76
343 TestMultiNode/serial/StopMultiNode 24.05
344 TestMultiNode/serial/RestartMultiNode 49.16
345 TestMultiNode/serial/ValidateNameConflict 35.63
350 TestPreload 120.31
352 TestScheduledStopUnix 112.37
355 TestInsufficientStorage 13.26
356 TestRunningBinaryUpgrade 304.53
359 TestMissingContainerUpgrade 115.78
361 TestPause/serial/Start 89.34
363 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
364 TestNoKubernetes/serial/StartWithK8s 42.99
365 TestNoKubernetes/serial/StartWithStopK8s 13.71
366 TestNoKubernetes/serial/Start 8.36
367 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
368 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
369 TestNoKubernetes/serial/ProfileList 1.16
370 TestNoKubernetes/serial/Stop 1.32
371 TestNoKubernetes/serial/StartNoArgs 7.13
372 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
380 TestNetworkPlugins/group/false 3.8
384 TestPause/serial/SecondStartNoReconfiguration 28.19
386 TestStoppedBinaryUpgrade/Setup 1.05
387 TestStoppedBinaryUpgrade/Upgrade 303.15
395 TestNetworkPlugins/group/auto/Start 84.84
396 TestNetworkPlugins/group/auto/KubeletFlags 0.33
397 TestNetworkPlugins/group/auto/NetCatPod 11.38
398 TestNetworkPlugins/group/auto/DNS 0.48
399 TestNetworkPlugins/group/auto/Localhost 0.14
400 TestNetworkPlugins/group/auto/HairPin 0.15
401 TestStoppedBinaryUpgrade/MinikubeLogs 2.47
402 TestNetworkPlugins/group/kindnet/Start 85.85
403 TestNetworkPlugins/group/calico/Start 61.5
404 TestNetworkPlugins/group/calico/ControllerPod 6.01
405 TestNetworkPlugins/group/calico/KubeletFlags 0.36
406 TestNetworkPlugins/group/calico/NetCatPod 10.27
407 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
408 TestNetworkPlugins/group/calico/DNS 0.17
409 TestNetworkPlugins/group/calico/Localhost 0.14
410 TestNetworkPlugins/group/calico/HairPin 0.16
411 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
412 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
413 TestNetworkPlugins/group/kindnet/DNS 0.18
414 TestNetworkPlugins/group/kindnet/Localhost 0.21
415 TestNetworkPlugins/group/kindnet/HairPin 0.18
416 TestNetworkPlugins/group/custom-flannel/Start 64.27
417 TestNetworkPlugins/group/enable-default-cni/Start 77.94
418 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
419 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
420 TestNetworkPlugins/group/custom-flannel/DNS 0.16
421 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
422 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
423 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
424 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
425 TestNetworkPlugins/group/flannel/Start 65.28
426 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
427 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
428 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
429 TestNetworkPlugins/group/bridge/Start 80.19
430 TestNetworkPlugins/group/flannel/ControllerPod 6.01
431 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
432 TestNetworkPlugins/group/flannel/NetCatPod 12.29
433 TestNetworkPlugins/group/flannel/DNS 0.17
434 TestNetworkPlugins/group/flannel/Localhost 0.16
435 TestNetworkPlugins/group/flannel/HairPin 0.17
438 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
439 TestNetworkPlugins/group/bridge/NetCatPod 11.3
440 TestNetworkPlugins/group/bridge/DNS 0.19
441 TestNetworkPlugins/group/bridge/Localhost 0.18
442 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.28.0/json-events (6.57s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-521939 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-521939 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.572510992s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.57s)
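Each "(dbg) Run:" entry is the harness shelling out to the built binary and timing the invocation. A hedged sketch of that shape; the arguments are abridged from the log above, while the timing and error handling are illustrative, not minikube's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-o=json", "--download-only", "-p", "download-only-521939",
		"--force", "--alsologtostderr", "--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit err=%v after %s (%d bytes of output)\n",
		err, time.Since(start).Round(time.Millisecond), len(out))
}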

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 10:10:49.817497  488068 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1206 10:10:49.817577  488068 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-521939
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-521939: exit status 85 (91.162835ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-521939 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-521939 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:10:43
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:10:43.291555  488073 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:10:43.291833  488073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:10:43.291879  488073 out.go:374] Setting ErrFile to fd 2...
	I1206 10:10:43.291901  488073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:10:43.292221  488073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	W1206 10:10:43.292413  488073 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22049-484819/.minikube/config/config.json: open /home/jenkins/minikube-integration/22049-484819/.minikube/config/config.json: no such file or directory
	I1206 10:10:43.292879  488073 out.go:368] Setting JSON to true
	I1206 10:10:43.293717  488073 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10395,"bootTime":1765005449,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:10:43.293845  488073 start.go:143] virtualization:  
	I1206 10:10:43.299217  488073 out.go:99] [download-only-521939] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1206 10:10:43.299427  488073 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 10:10:43.299568  488073 notify.go:221] Checking for updates...
	I1206 10:10:43.302958  488073 out.go:171] MINIKUBE_LOCATION=22049
	I1206 10:10:43.306456  488073 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:10:43.309893  488073 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:10:43.313195  488073 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:10:43.316511  488073 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:10:43.322668  488073 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:10:43.322963  488073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:10:43.351242  488073 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:10:43.351383  488073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:10:43.409575  488073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 10:10:43.399694102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:10:43.409683  488073 docker.go:319] overlay module found
	I1206 10:10:43.412831  488073 out.go:99] Using the docker driver based on user configuration
	I1206 10:10:43.412867  488073 start.go:309] selected driver: docker
	I1206 10:10:43.412876  488073 start.go:927] validating driver "docker" against <nil>
	I1206 10:10:43.412991  488073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:10:43.466763  488073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 10:10:43.458026483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:10:43.466913  488073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:10:43.467230  488073 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:10:43.467382  488073 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:10:43.470562  488073 out.go:171] Using Docker driver with root privileges
	I1206 10:10:43.473611  488073 cni.go:84] Creating CNI manager for ""
	I1206 10:10:43.473683  488073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:10:43.473700  488073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:10:43.473782  488073 start.go:353] cluster config:
	{Name:download-only-521939 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-521939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:10:43.476785  488073 out.go:99] Starting "download-only-521939" primary control-plane node in "download-only-521939" cluster
	I1206 10:10:43.476806  488073 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:10:43.479684  488073 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:10:43.479718  488073 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:10:43.479902  488073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:10:43.500008  488073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:10:43.500030  488073 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:10:43.500174  488073 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 10:10:43.500270  488073 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:10:43.537487  488073 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:10:43.537512  488073 cache.go:65] Caching tarball of preloaded images
	I1206 10:10:43.537693  488073 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:10:43.541261  488073 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 10:10:43.541296  488073 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1206 10:10:43.632565  488073 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1206 10:10:43.632696  488073 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-521939 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521939"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
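The download step captured above is a fetch-then-verify flow: minikube asks the GCS API for the tarball's MD5 digest (e092595ade89dbfc477bd4cd6b9c633b in this run) and passes it along as a ?checksum=md5:... query so the download can be validated. A minimal Go sketch of the same hash-while-downloading idea follows; the function name and destination path are illustrative, not minikube's actual downloader.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 (hypothetical helper) streams url to dest and verifies
	// the MD5 digest in a single pass over the response body.
	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash while writing so the tarball is only read once.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
		if err := downloadWithMD5(url, "preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
			fmt.Println("download failed:", err)
		}
	}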

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-521939
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (5.02s)
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-629505 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-629505 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.020396005s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (5.02s)

TestDownloadOnly/v1.34.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 10:10:55.307372  488068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 10:10:55.307408  488068 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-629505
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-629505: exit status 85 (89.587025ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-521939 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-521939 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ delete  │ -p download-only-521939                                                                                                                                                   │ download-only-521939 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ start   │ -o=json --download-only -p download-only-629505 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-629505 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:10:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:10:50.332233  488270 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:10:50.332427  488270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:10:50.332457  488270 out.go:374] Setting ErrFile to fd 2...
	I1206 10:10:50.332481  488270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:10:50.332771  488270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:10:50.333273  488270 out.go:368] Setting JSON to true
	I1206 10:10:50.334155  488270 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10402,"bootTime":1765005449,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:10:50.334259  488270 start.go:143] virtualization:  
	I1206 10:10:50.337881  488270 out.go:99] [download-only-629505] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:10:50.338163  488270 notify.go:221] Checking for updates...
	I1206 10:10:50.341168  488270 out.go:171] MINIKUBE_LOCATION=22049
	I1206 10:10:50.344326  488270 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:10:50.347398  488270 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:10:50.350289  488270 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:10:50.353280  488270 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:10:50.359226  488270 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:10:50.359506  488270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:10:50.385884  488270 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:10:50.385997  488270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:10:50.446247  488270 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-06 10:10:50.436446907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:10:50.446361  488270 docker.go:319] overlay module found
	I1206 10:10:50.449401  488270 out.go:99] Using the docker driver based on user configuration
	I1206 10:10:50.449441  488270 start.go:309] selected driver: docker
	I1206 10:10:50.449452  488270 start.go:927] validating driver "docker" against <nil>
	I1206 10:10:50.449566  488270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:10:50.508990  488270 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-06 10:10:50.499456033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:10:50.509153  488270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:10:50.509413  488270 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:10:50.509565  488270 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:10:50.512736  488270 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-629505 host does not exist
	  To start a cluster, run: "minikube start -p download-only-629505"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-629505
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.31s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-273530 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-273530 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.313616135s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.31s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 10:11:00.071296  488068 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1206 10:11:00.071338  488068 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.19s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-273530
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-273530: exit status 85 (193.553353ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-521939 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-521939 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ delete  │ -p download-only-521939                                                                                                                                                          │ download-only-521939 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ start   │ -o=json --download-only -p download-only-629505 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-629505 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ delete  │ -p download-only-629505                                                                                                                                                          │ download-only-629505 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │ 06 Dec 25 10:10 UTC │
	│ start   │ -o=json --download-only -p download-only-273530 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-273530 │ jenkins │ v1.37.0 │ 06 Dec 25 10:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:10:55
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:10:55.806174  488465 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:10:55.806317  488465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:10:55.806344  488465 out.go:374] Setting ErrFile to fd 2...
	I1206 10:10:55.806365  488465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:10:55.806654  488465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:10:55.807079  488465 out.go:368] Setting JSON to true
	I1206 10:10:55.807955  488465 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10407,"bootTime":1765005449,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:10:55.808022  488465 start.go:143] virtualization:  
	I1206 10:10:55.811552  488465 out.go:99] [download-only-273530] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:10:55.811869  488465 notify.go:221] Checking for updates...
	I1206 10:10:55.815894  488465 out.go:171] MINIKUBE_LOCATION=22049
	I1206 10:10:55.819169  488465 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:10:55.822088  488465 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:10:55.825073  488465 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:10:55.828148  488465 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:10:55.833823  488465 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:10:55.834162  488465 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:10:55.856574  488465 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:10:55.856685  488465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:10:55.914325  488465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:10:55.905124857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:10:55.914435  488465 docker.go:319] overlay module found
	I1206 10:10:55.917287  488465 out.go:99] Using the docker driver based on user configuration
	I1206 10:10:55.917336  488465 start.go:309] selected driver: docker
	I1206 10:10:55.917344  488465 start.go:927] validating driver "docker" against <nil>
	I1206 10:10:55.917451  488465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:10:55.985367  488465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:10:55.97579986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:10:55.985543  488465 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:10:55.985844  488465 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:10:55.986014  488465 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:10:55.989163  488465 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-273530 host does not exist
	  To start a cluster, run: "minikube start -p download-only-273530"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.19s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.28s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.28s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-273530
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)
=== RUN   TestBinaryMirror
I1206 10:11:01.536680  488068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-243935 --alsologtostderr --binary-mirror http://127.0.0.1:42811 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-243935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-243935
--- PASS: TestBinaryMirror (0.62s)
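The checksum=file:https://... form logged by TestBinaryMirror means the expected digest is not inlined but fetched from a second URL (kubectl's published .sha256 file). A rough Go sketch of that two-step verification, assuming the .sha256 file contains just the bare hex digest; this is an illustration, not minikube's download code.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetchExpectedSHA256 retrieves the digest that the binary will be
	// checked against; the published file holds the bare hex digest.
	func fetchExpectedSHA256(sumURL string) (string, error) {
		resp, err := http.Get(sumURL)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		b, err := io.ReadAll(resp.Body)
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil
	}

	// verifyBinary downloads the binary and compares its SHA-256 against
	// the digest fetched from sumURL.
	func verifyBinary(binURL, sumURL string) error {
		want, err := fetchExpectedSHA256(sumURL)
		if err != nil {
			return err
		}
		resp, err := http.Get(binURL)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		h := sha256.New()
		if _, err := io.Copy(h, resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("sha256 mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl"
		fmt.Println(verifyBinary(base, base+".sha256"))
	}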

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-463201
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-463201: exit status 85 (75.376313ms)

-- stdout --
	* Profile "addons-463201" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463201"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-463201
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-463201: exit status 85 (72.099761ms)

-- stdout --
	* Profile "addons-463201" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463201"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (152.86s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-463201 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-463201 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.858456554s)
--- PASS: TestAddons/Setup (152.86s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-463201 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-463201 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-463201 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-463201 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2fd70e1e-add3-4040-a62b-6ac6f18a3b36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2fd70e1e-add3-4040-a62b-6ac6f18a3b36] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.010852828s
addons_test.go:694: (dbg) Run:  kubectl --context addons-463201 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-463201 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-463201 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-463201 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
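The FakeCredentials checks above reduce to exec'ing into the busybox pod and asserting that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT. A compact sketch of the same probe via os/exec; it assumes kubectl is on PATH and that the addons-463201 context from this run still exists.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podEnv runs "kubectl exec <pod> -- printenv <key>" against the given
	// context and returns the value, mirroring the test's own commands.
	func podEnv(context, pod, key string) (string, error) {
		out, err := exec.Command("kubectl", "--context", context,
			"exec", pod, "--", "printenv", key).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		v, err := podEnv("addons-463201", "busybox", "GOOGLE_APPLICATION_CREDENTIALS")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("credentials file injected at:", v)
	}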

TestAddons/StoppedEnableDisable (12.43s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-463201
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-463201: (12.145740792s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-463201
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-463201
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-463201
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestCertOptions (37.17s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-196078 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1206 11:23:35.937240  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-196078 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.26070988s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-196078 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-196078 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-196078 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-196078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-196078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-196078: (2.135395656s)
--- PASS: TestCertOptions (37.17s)
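TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names values show up as SANs in /var/lib/minikube/certs/apiserver.crt, which the test inspects with openssl over ssh. The same check can be done with crypto/x509 once the PEM has been copied off the node; a sketch under that assumption (the local file name is made up).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied from the node beforehand
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// These lists should include www.google.com and 192.168.15.15 if
		// the flags from the test invocation took effect.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}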

TestCertExpiration (240.17s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-378339 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1206 11:23:15.605448  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-378339 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.184584362s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-378339 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-378339 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.515808349s)
helpers_test.go:175: Cleaning up "cert-expiration-378339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-378339
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-378339: (2.466216367s)
--- PASS: TestCertExpiration (240.17s)
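TestCertExpiration first issues certificates valid for only 3m, then restarts with --cert-expiration=8760h so they are re-issued. One way to observe the effect is to read NotAfter off the certificate the apiserver actually serves; a sketch that dials the endpoint without verification (the address is a placeholder for whatever "minikube ip" reports, plus the profile's apiserver port).

	package main

	import (
		"crypto/tls"
		"fmt"
		"time"
	)

	func main() {
		conn, err := tls.Dial("tcp", "192.168.49.2:8443", &tls.Config{
			InsecureSkipVerify: true, // we only want to inspect the cert
		})
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		cert := conn.ConnectionState().PeerCertificates[0]
		fmt.Printf("apiserver cert expires %s (%s from now)\n",
			cert.NotAfter.Format(time.RFC3339),
			time.Until(cert.NotAfter).Round(time.Minute))
	}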

TestForceSystemdFlag (42.03s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-114030 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-114030 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.676252573s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-114030 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-114030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-114030
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-114030: (2.888133489s)
--- PASS: TestForceSystemdFlag (42.03s)
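TestForceSystemdFlag verifies the flag by cat'ing CRI-O's drop-in config and checking the cgroup manager. A tiny scanner for the same check against a locally saved copy of 02-crio.conf; the key name follows CRI-O's TOML config, and the local path is an assumption.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("02-crio.conf")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "cgroup_manager") {
				fmt.Println("found:", line) // expect: cgroup_manager = "systemd"
				return
			}
		}
		fmt.Println("cgroup_manager not set; CRI-O will use its default")
	}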

TestForceSystemdEnv (42.47s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-163342 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-163342 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.518993823s)
helpers_test.go:175: Cleaning up "force-systemd-env-163342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-163342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-163342: (2.947964095s)
--- PASS: TestForceSystemdEnv (42.47s)

TestErrorSpam/setup (32.67s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-637404 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-637404 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-637404 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-637404 --driver=docker  --container-runtime=crio: (32.666292243s)
--- PASS: TestErrorSpam/setup (32.67s)

TestErrorSpam/start (0.82s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.15s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (6.63s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause: exit status 80 (2.281135152s)

-- stdout --
	* Pausing node nospam-637404 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:17:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause: exit status 80 (2.463327152s)

-- stdout --
	* Pausing node nospam-637404 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:17:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause: exit status 80 (1.886980364s)

-- stdout --
	* Pausing node nospam-637404 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:17:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.63s)
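Note: both pause attempts above fail the same way: inside the node, `sudo runc list -f json` exits 1 because /run/runc (runc's state directory) does not exist, so minikube cannot enumerate containers to pause. A minimal check, assuming the nospam-637404 profile is still running (the `ls` is an added diagnostic, not part of the test):

    out/minikube-linux-arm64 -p nospam-637404 ssh "sudo ls /run/runc"
    out/minikube-linux-arm64 -p nospam-637404 ssh "sudo runc list -f json"

The step nonetheless ends in PASS, apparently because TestErrorSpam compares output across repeated runs rather than requiring pause itself to succeed.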

TestErrorSpam/unpause (5.76s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause: exit status 80 (1.91577224s)

-- stdout --
	* Unpausing node nospam-637404 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:17:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause: exit status 80 (1.522379899s)

-- stdout --
	* Unpausing node nospam-637404 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:17:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause: exit status 80 (2.323199868s)

-- stdout --
	* Unpausing node nospam-637404 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:17:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.76s)

TestErrorSpam/stop (1.53s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 stop: (1.315917871s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637404 --log_dir /tmp/nospam-637404 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.11s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-137526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1206 10:18:35.939108  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:35.945718  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:35.957173  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:35.978637  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:36.020180  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:36.101762  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:36.263263  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:36.584714  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:37.226805  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:38.508287  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:41.071110  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-137526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.105671989s)
--- PASS: TestFunctional/serial/StartWithProxy (47.11s)
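Note: the repeated cert_rotation "Loading client cert failed" lines reference the client certificate of the earlier addons-463201 profile, which no longer exists on disk; they come from the shared kubeconfig still naming that profile and appear to be noise for this functional-137526 start, which completes and passes. A quick way to see what is still referenced (added diagnostics, not part of the test):

    kubectl config get-contexts
    ls /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/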

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.05s)
=== RUN   TestFunctional/serial/SoftStart
I1206 10:18:46.015238  488068 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-137526 --alsologtostderr -v=8
E1206 10:18:46.192937  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:56.435742  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-137526 --alsologtostderr -v=8: (30.044285002s)
functional_test.go:678: soft start took 30.044844695s for "functional-137526" cluster.
I1206 10:19:16.059843  488068 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (30.05s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-137526 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:3.1
E1206 10:19:16.917157  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:3.1: (1.193282489s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:3.3: (1.187216782s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:latest: (1.123099299s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.50s)
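Note: `cache add` pulls an image into the host-side cache and loads it into the cluster node, so later starts and reloads can work from the cache. The subcommand family exercised here and in the next few tests, exactly as invoked in the logs:

    out/minikube-linux-arm64 -p functional-137526 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1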

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-137526 /tmp/TestFunctionalserialCacheCmdcacheadd_local1161449748/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cache add minikube-local-cache-test:functional-137526
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cache delete minikube-local-cache-test:functional-137526
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-137526
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.761181ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
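Note: the reload flow above is: remove the image from the node with crictl, confirm `crictl inspecti` now fails (exit 1, "no such image ... present"), then run `cache reload` to push every host-cached image back into the node, after which the final inspecti succeeds. By hand:

    out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1
    out/minikube-linux-arm64 -p functional-137526 cache reload
    out/minikube-linux-arm64 -p functional-137526 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 0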

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 kubectl -- --context functional-137526 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)
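Note: `minikube kubectl` passes everything after `--` through to a kubectl binary matched to the cluster's Kubernetes version (downloading it on first use), so this test and the next run the same query through the wrapper and through the standalone out/kubectl respectively:

    out/minikube-linux-arm64 -p functional-137526 kubectl -- --context functional-137526 get pods
    out/kubectl --context functional-137526 get pods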

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-137526 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (37.42s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-137526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 10:19:57.878619  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-137526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.416005338s)
functional_test.go:776: restart took 37.416097011s for "functional-137526" cluster.
I1206 10:20:01.049938  488068 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (37.42s)
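Note: --extra-config takes component.key=value pairs and is applied by restarting the existing cluster in place, which is why this step is timed as a ~37s restart rather than a full rebuild; here it turns on the NamespaceAutoProvision admission plugin on the apiserver. As invoked:

    out/minikube-linux-arm64 start -p functional-137526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all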

TestFunctional/serial/ComponentHealth (0.14s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-137526 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
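Note: the health check queries the static control-plane pods by label and asserts each is Running and Ready. Roughly the same view by hand (the jsonpath expression is an illustrative sketch, not the test's code):

    kubectl --context functional-137526 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'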

TestFunctional/serial/LogsCmd (1.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 logs: (1.509126651s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.61s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 logs --file /tmp/TestFunctionalserialLogsFileCmd3700712283/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 logs --file /tmp/TestFunctionalserialLogsFileCmd3700712283/001/logs.txt: (1.60422847s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.61s)

TestFunctional/serial/InvalidService (3.92s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-137526 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-137526
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-137526: exit status 115 (395.702476ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32540 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-137526 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.92s)
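Note: `minikube service` resolves and prints the NodePort URL even for a broken service, then fails with SVC_UNREACHABLE (exit 115) because no running pod backs invalid-svc. The empty backing set can be confirmed directly (added diagnostic, not part of the test):

    kubectl --context functional-137526 get endpoints invalid-svc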

TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 config get cpus: exit status 14 (90.770009ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 config get cpus: exit status 14 (75.295385ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
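Note: `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the test asserts on either side of the set/unset cycle:

    out/minikube-linux-arm64 -p functional-137526 config get cpus     # exits 14 while unset
    out/minikube-linux-arm64 -p functional-137526 config set cpus 2
    out/minikube-linux-arm64 -p functional-137526 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-137526 config unset cpus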

TestFunctional/parallel/DashboardCmd (7.27s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-137526 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-137526 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 515487: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.27s)

TestFunctional/parallel/DryRun (0.52s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.369794ms)

-- stdout --
	* [functional-137526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 10:20:47.665522  514840 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:20:47.665672  514840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:20:47.665685  514840 out.go:374] Setting ErrFile to fd 2...
	I1206 10:20:47.665713  514840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:20:47.666009  514840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:20:47.666424  514840 out.go:368] Setting JSON to false
	I1206 10:20:47.667437  514840 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10999,"bootTime":1765005449,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:20:47.667515  514840 start.go:143] virtualization:  
	I1206 10:20:47.671300  514840 out.go:179] * [functional-137526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:20:47.674447  514840 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:20:47.674534  514840 notify.go:221] Checking for updates...
	I1206 10:20:47.680339  514840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:20:47.683250  514840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:20:47.686091  514840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:20:47.688960  514840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:20:47.691809  514840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:20:47.695305  514840 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:20:47.695942  514840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:20:47.717650  514840 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:20:47.717778  514840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:20:47.779453  514840 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 10:20:47.768847985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:20:47.779554  514840 docker.go:319] overlay module found
	I1206 10:20:47.782751  514840 out.go:179] * Using the docker driver based on existing profile
	I1206 10:20:47.785621  514840 start.go:309] selected driver: docker
	I1206 10:20:47.785644  514840 start.go:927] validating driver "docker" against &{Name:functional-137526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-137526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:20:47.785748  514840 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:20:47.789315  514840 out.go:203] 
	W1206 10:20:47.792253  514840 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:20:47.795178  514840 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-137526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.52s)
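Note: --dry-run walks the full validation path against the existing profile without creating or changing anything; the first invocation exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because the requested 250MiB is below minikube's 1800MB usable minimum, and the second, without the memory override, validates cleanly. As invoked:

    out/minikube-linux-arm64 start -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio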

TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-137526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (263.735593ms)

-- stdout --
	* [functional-137526] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 10:20:48.207605  515025 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:20:48.207723  515025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:20:48.207732  515025 out.go:374] Setting ErrFile to fd 2...
	I1206 10:20:48.207737  515025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:20:48.208118  515025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:20:48.208490  515025 out.go:368] Setting JSON to false
	I1206 10:20:48.209385  515025 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11000,"bootTime":1765005449,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:20:48.209458  515025 start.go:143] virtualization:  
	I1206 10:20:48.214669  515025 out.go:179] * [functional-137526] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 10:20:48.218011  515025 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:20:48.218137  515025 notify.go:221] Checking for updates...
	I1206 10:20:48.224130  515025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:20:48.227088  515025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:20:48.230084  515025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:20:48.233077  515025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:20:48.236123  515025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:20:48.239509  515025 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:20:48.240098  515025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:20:48.277793  515025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:20:48.277929  515025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:20:48.374526  515025 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 10:20:48.361937703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:20:48.374641  515025 docker.go:319] overlay module found
	I1206 10:20:48.377674  515025 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 10:20:48.380746  515025 start.go:309] selected driver: docker
	I1206 10:20:48.380776  515025 start.go:927] validating driver "docker" against &{Name:functional-137526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-137526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:20:48.380884  515025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:20:48.384320  515025 out.go:203] 
	W1206 10:20:48.387260  515025 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 10:20:48.391927  515025 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
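Note: this is the same failing dry-run as above, but with the user-facing messages localized to French ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."); minikube picks its message catalog from the locale environment, so a sketch of a manual reproduction (assuming the fr locale is available on the host; the test harness sets the locale itself) would be:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-137526 --dry-run --memory 250MB --driver=docker --container-runtime=crio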

TestFunctional/parallel/StatusCmd (1.07s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/ServiceCmdConnect (7.68s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-137526 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-137526 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xmd27" [a992f788-3e8a-4570-9130-7d3aa52e5bc4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-xmd27" [a992f788-3e8a-4570-9130-7d3aa52e5bc4] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00350849s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30916
functional_test.go:1680: http://192.168.49.2:30916: success! body:
Request served by hello-node-connect-7d85dfc575-xmd27

HTTP/1.1 GET /

Host: 192.168.49.2:30916
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.68s)
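Note: this is the end-to-end NodePort path: create a deployment from kicbase/echo-server, expose port 8080 as a NodePort, resolve the URL with `service --url`, and GET it; the echo server answers with the request itself, which is the body shown above. By hand (the curl is added for illustration; the port is allocated per run):

    kubectl --context functional-137526 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-137526 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-137526 service hello-node-connect --url
    curl http://192.168.49.2:30916/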

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (25.45s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [acfd8d05-1c2c-4377-8d12-15e765dda0ba] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003961028s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-137526 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-137526 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-137526 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-137526 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5335405c-ba1d-4ca0-8f39-6ed6da214ea0] Pending
helpers_test.go:352: "sp-pod" [5335405c-ba1d-4ca0-8f39-6ed6da214ea0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5335405c-ba1d-4ca0-8f39-6ed6da214ea0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009695847s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-137526 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-137526 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-137526 delete -f testdata/storage-provisioner/pod.yaml: (1.271835429s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-137526 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b7254d83-c007-4010-9887-da274094ebdc] Pending
helpers_test.go:352: "sp-pod" [b7254d83-c007-4010-9887-da274094ebdc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b7254d83-c007-4010-9887-da274094ebdc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004082466s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-137526 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.45s)
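
For reference, the sequence above exercises the full PVC lifecycle: create a claim, mount it in a pod, write a file, recreate the pod, and verify the file survived. The following is a minimal sketch of the same round trip; the manifests are reconstructed from the names in the log (claim myclaim, pod sp-pod, container myfrontend, mount point /tmp/mount), while the image, storage size, and storage class are assumptions, since the actual testdata/storage-provisioner files are not reproduced in this report.

# Sketch only: assumed manifests, not the exact testdata files.
cat > /tmp/pvc-demo.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
cat > /tmp/pod-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx:alpine
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
kubectl --context functional-137526 apply -f /tmp/pvc-demo.yaml -f /tmp/pod-demo.yaml
kubectl --context functional-137526 wait --for=condition=Ready pod/sp-pod --timeout=4m
kubectl --context functional-137526 exec sp-pod -- touch /tmp/mount/foo
# Recreate only the pod; the claim (and its data) persists independently.
kubectl --context functional-137526 delete -f /tmp/pod-demo.yaml
kubectl --context functional-137526 apply -f /tmp/pod-demo.yaml
kubectl --context functional-137526 wait --for=condition=Ready pod/sp-pod --timeout=4m
kubectl --context functional-137526 exec sp-pod -- ls /tmp/mount    # expect: foo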

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (2.12s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh -n functional-137526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cp functional-137526:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2339877826/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh -n functional-137526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh -n functional-137526 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/488068/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /etc/test/nested/copy/488068/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
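
The file asserted above reaches the node because minikube copies the tree under $MINIKUBE_HOME/files into the machine, path-preserved, at start time; the 488068 path component is simply this test run's PID. A minimal sketch of the mechanism, assuming the default MINIKUBE_HOME of ~/.minikube:

# Sketch: anything under ~/.minikube/files/ is synced into the node on start.
mkdir -p ~/.minikube/files/etc/test/nested/copy/488068
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/488068/hosts
out/minikube-linux-arm64 -p functional-137526 start
out/minikube-linux-arm64 -p functional-137526 ssh "cat /etc/test/nested/copy/488068/hosts"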

TestFunctional/parallel/CertSync (2.20s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/488068.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /etc/ssl/certs/488068.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/488068.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /usr/share/ca-certificates/488068.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4880682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /etc/ssl/certs/4880682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4880682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /usr/share/ca-certificates/4880682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
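
Note the two naming schemes being verified: each certificate must appear both under its file name (488068.pem, 4880682.pem) and under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients look certificates up in a hashed directory. As a sketch, the hashed name can be derived directly from a local copy of the PEM:

# Sketch: compute the "<hash>.0" name the test expects for a given cert.
openssl x509 -noout -subject_hash -in 488068.pem    # prints e.g. 51391683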

TestFunctional/parallel/NodeLabels (0.10s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-137526 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
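
The go-template above prints only the label keys of the first node. An equivalent, more readable sketch of the same check:

kubectl --context functional-137526 get nodes --show-labels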

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh "sudo systemctl is-active docker": exit status 1 (332.62832ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh "sudo systemctl is-active containerd": exit status 1 (342.063756ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
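
Both non-selected runtimes report inactive, which is the passing case on a crio profile: systemctl is-active exits non-zero for inactive units (status 3, surfaced here through ssh), so the non-zero exits above are expected. A sketch of the complementary check, which should report active:

# Sketch: the configured runtime should be the only active one on this profile.
out/minikube-linux-arm64 -p functional-137526 ssh "sudo systemctl is-active crio"    # expect "active", exit 0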

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.95s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.95s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-137526 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-137526
localhost/kicbase/echo-server:functional-137526
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-137526 image ls --format short --alsologtostderr:
I1206 10:20:50.756438  515617 out.go:360] Setting OutFile to fd 1 ...
I1206 10:20:50.756552  515617 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:50.756597  515617 out.go:374] Setting ErrFile to fd 2...
I1206 10:20:50.756603  515617 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:50.756859  515617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:20:50.757490  515617 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:50.757617  515617 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:50.766240  515617 cli_runner.go:164] Run: docker container inspect functional-137526 --format={{.State.Status}}
I1206 10:20:50.784871  515617 ssh_runner.go:195] Run: systemctl --version
I1206 10:20:50.784921  515617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137526
I1206 10:20:50.804595  515617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-137526/id_rsa Username:docker}
I1206 10:20:50.922347  515617 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
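
As the stderr trace shows, image ls on a crio cluster is backed by crictl images --output json run over ssh inside the node. A sketch of querying the image store directly:

# Sketch: read the runtime's image store without going through "minikube image ls".
out/minikube-linux-arm64 -p functional-137526 ssh "sudo crictl images"
out/minikube-linux-arm64 -p functional-137526 ssh "sudo crictl images --output json"    # the raw data parsed above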

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-137526 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-137526  │ ce2d2cda2d858 │ 4.79MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/minikube-local-cache-test     │ functional-137526  │ 1276d23f1e1a0 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-137526 image ls --format table --alsologtostderr:
I1206 10:20:56.444488  515950 out.go:360] Setting OutFile to fd 1 ...
I1206 10:20:56.444683  515950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:56.444710  515950 out.go:374] Setting ErrFile to fd 2...
I1206 10:20:56.444734  515950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:56.445083  515950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:20:56.445755  515950 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:56.445935  515950 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:56.446511  515950 cli_runner.go:164] Run: docker container inspect functional-137526 --format={{.State.Status}}
I1206 10:20:56.469726  515950 ssh_runner.go:195] Run: systemctl --version
I1206 10:20:56.469784  515950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137526
I1206 10:20:56.490420  515950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-137526/id_rsa Username:docker}
I1206 10:20:56.616644  515950 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.70s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-137526 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-137526"],"size":"4788229"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"1276d23f1e1a0925da9c89d6e0ce9c53c0dc33e991879b35be5778ca1ec8c6cf","repoDigests":["localhost/minikube-local-cache-test@sha256:6a0caa4c1af2161ef610bb873a2b82e1a9988a94cae0f305c031005a66f135e5"],"repoTags":["localhost/minikube-local-cache-test:functional-137526"],"size":"3330"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-137526 image ls --format json --alsologtostderr:
I1206 10:20:55.766643  515863 out.go:360] Setting OutFile to fd 1 ...
I1206 10:20:55.767509  515863 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:55.767553  515863 out.go:374] Setting ErrFile to fd 2...
I1206 10:20:55.767575  515863 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:55.767914  515863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:20:55.768604  515863 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:55.768796  515863 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:55.769377  515863 cli_runner.go:164] Run: docker container inspect functional-137526 --format={{.State.Status}}
I1206 10:20:55.790378  515863 ssh_runner.go:195] Run: systemctl --version
I1206 10:20:55.790449  515863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137526
I1206 10:20:55.823823  515863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-137526/id_rsa Username:docker}
I1206 10:20:56.025708  515863 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.70s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-137526 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 1276d23f1e1a0925da9c89d6e0ce9c53c0dc33e991879b35be5778ca1ec8c6cf
repoDigests:
- localhost/minikube-local-cache-test@sha256:6a0caa4c1af2161ef610bb873a2b82e1a9988a94cae0f305c031005a66f135e5
repoTags:
- localhost/minikube-local-cache-test:functional-137526
size: "3330"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-137526
size: "4788229"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-137526 image ls --format yaml --alsologtostderr:
I1206 10:20:51.037823  515656 out.go:360] Setting OutFile to fd 1 ...
I1206 10:20:51.037992  515656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:51.038006  515656 out.go:374] Setting ErrFile to fd 2...
I1206 10:20:51.038013  515656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:51.038417  515656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:20:51.039077  515656 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:51.040465  515656 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:51.041299  515656 cli_runner.go:164] Run: docker container inspect functional-137526 --format={{.State.Status}}
I1206 10:20:51.063535  515656 ssh_runner.go:195] Run: systemctl --version
I1206 10:20:51.063684  515656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137526
I1206 10:20:51.095024  515656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-137526/id_rsa Username:docker}
I1206 10:20:51.211365  515656 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh pgrep buildkitd: exit status 1 (395.07057ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr
2025/12/06 10:20:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr: (5.588413773s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4dd18bbd9c3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-137526
--> 7ef8dd410a9
Successfully tagged localhost/my-image:functional-137526
7ef8dd410a9f4ceaa7eee66a4dc347564716aa5ad87c5d9e99f73cce522cead9
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-137526 image build -t localhost/my-image:functional-137526 testdata/build --alsologtostderr:
I1206 10:20:51.734889  515758 out.go:360] Setting OutFile to fd 1 ...
I1206 10:20:51.735838  515758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:51.735933  515758 out.go:374] Setting ErrFile to fd 2...
I1206 10:20:51.735968  515758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:20:51.736494  515758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:20:51.737522  515758 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:51.738635  515758 config.go:182] Loaded profile config "functional-137526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:20:51.739399  515758 cli_runner.go:164] Run: docker container inspect functional-137526 --format={{.State.Status}}
I1206 10:20:51.768678  515758 ssh_runner.go:195] Run: systemctl --version
I1206 10:20:51.768739  515758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137526
I1206 10:20:51.796671  515758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-137526/id_rsa Username:docker}
I1206 10:20:51.911850  515758 build_images.go:162] Building image from path: /tmp/build.2917746406.tar
I1206 10:20:51.911941  515758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 10:20:51.922062  515758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2917746406.tar
I1206 10:20:51.927915  515758 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2917746406.tar: stat -c "%s %y" /var/lib/minikube/build/build.2917746406.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2917746406.tar': No such file or directory
I1206 10:20:51.927944  515758 ssh_runner.go:362] scp /tmp/build.2917746406.tar --> /var/lib/minikube/build/build.2917746406.tar (3072 bytes)
I1206 10:20:51.954791  515758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2917746406
I1206 10:20:51.965239  515758 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2917746406 -xf /var/lib/minikube/build/build.2917746406.tar
I1206 10:20:51.976141  515758 crio.go:315] Building image: /var/lib/minikube/build/build.2917746406
I1206 10:20:51.976258  515758 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-137526 /var/lib/minikube/build/build.2917746406 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1206 10:20:57.225657  515758 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-137526 /var/lib/minikube/build/build.2917746406 --cgroup-manager=cgroupfs: (5.249350215s)
I1206 10:20:57.225726  515758 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2917746406
I1206 10:20:57.233661  515758 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2917746406.tar
I1206 10:20:57.241827  515758 build_images.go:218] Built localhost/my-image:functional-137526 from /tmp/build.2917746406.tar
I1206 10:20:57.241857  515758 build_images.go:134] succeeded building to: functional-137526
I1206 10:20:57.241862  515758 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.27s)
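
The build context under testdata/build can be read back from the STEP lines echoed in the stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); only the contents of content.txt are not shown, so the placeholder below is an assumption. As the stderr shows, on a crio cluster minikube delegates the build to podman inside the node. A sketch of the equivalent hand-made build:

# Sketch: reconstruct the build context from the echoed steps.
mkdir -p /tmp/build-demo && cd /tmp/build-demo
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
echo placeholder > content.txt    # the real content.txt is not shown in the log
out/minikube-linux-arm64 -p functional-137526 image build -t localhost/my-image:functional-137526 .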

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-137526
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image load --daemon kicbase/echo-server:functional-137526 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-137526 image load --daemon kicbase/echo-server:functional-137526 --alsologtostderr: (1.288614341s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image load --daemon kicbase/echo-server:functional-137526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-137526 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-137526 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-88ldk" [087780cd-0285-4b9f-b977-59d2782ec56d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-88ldk" [087780cd-0285-4b9f-b977-59d2782ec56d] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004503175s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)
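
The deployment/service pair created here is what the remaining ServiceCmd subtests resolve; the NodePort they end up hitting (30255 below) is allocated by Kubernetes when the deployment is exposed. A sketch of reading it back directly:

kubectl --context functional-137526 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'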

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-137526
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image load --daemon kicbase/echo-server:functional-137526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image save kicbase/echo-server:functional-137526 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image rm kicbase/echo-server:functional-137526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-137526
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 image save --daemon kicbase/echo-server:functional-137526 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-137526
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-137526 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-137526 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-137526 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 512084: os: process already finished
helpers_test.go:525: unable to kill pid 511956: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-137526 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-137526 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-137526 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [8b14a99f-28a5-4519-ba68-ff0aecedf381] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [8b14a99f-28a5-4519-ba68-ff0aecedf381] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004028613s
I1206 10:20:26.789378  488068 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/ServiceCmd/List (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 service list -o json
functional_test.go:1504: Took "366.087867ms" to run "out/minikube-linux-arm64 -p functional-137526 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30255
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30255
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-137526 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.109.146 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
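
The tunnel flow recorded above can be replayed by hand. A minimal sketch, assuming an existing functional-137526 profile and this suite's testdata/testsvc.yaml LoadBalancer service (the IP is whatever the cluster assigns):

  out/minikube-linux-arm64 -p functional-137526 tunnel &
  kubectl --context functional-137526 apply -f testdata/testsvc.yaml
  kubectl --context functional-137526 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.104.109.146    # substitute the ingress IP printed by the previous command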

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-137526 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "361.31189ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "65.711145ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "390.131881ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.73059ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (7.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdany-port3743209006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765016437214673254" to /tmp/TestFunctionalparallelMountCmdany-port3743209006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765016437214673254" to /tmp/TestFunctionalparallelMountCmdany-port3743209006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765016437214673254" to /tmp/TestFunctionalparallelMountCmdany-port3743209006/001/test-1765016437214673254
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.577825ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:20:37.558535  488068 retry.go:31] will retry after 736.76736ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 10:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 10:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 10:20 test-1765016437214673254
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh cat /mount-9p/test-1765016437214673254
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-137526 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [09fd7e40-d913-446c-bf9f-84145f099a84] Pending
helpers_test.go:352: "busybox-mount" [09fd7e40-d913-446c-bf9f-84145f099a84] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [09fd7e40-d913-446c-bf9f-84145f099a84] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [09fd7e40-d913-446c-bf9f-84145f099a84] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003188315s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-137526 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdany-port3743209006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.32s)
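
The mount flow above boils down to: start a 9p mount daemon on the host, verify the mount from inside the node, exercise it, then unmount. A minimal sketch using the same commands the test records (/tmp/host-dir is a placeholder host path):

  out/minikube-linux-arm64 mount -p functional-137526 /tmp/host-dir:/mount-9p &
  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-137526 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 -p functional-137526 ssh "sudo umount -f /mount-9p"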

TestFunctional/parallel/MountCmd/specific-port (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdspecific-port714690604/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.06702ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:20:44.886638  488068 retry.go:31] will retry after 683.3372ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdspecific-port714690604/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh "sudo umount -f /mount-9p": exit status 1 (286.353102ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-137526 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdspecific-port714690604/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T" /mount1: exit status 1 (553.482899ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:20:47.213014  488068 retry.go:31] will retry after 564.24713ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-137526 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-137526 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-137526 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2137041211/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.30s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-137526
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-137526
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-137526
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-484819/.minikube/files/etc/test/nested/copy/488068/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 cache add registry.k8s.io/pause:3.1: (1.197245318s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 cache add registry.k8s.io/pause:3.3: (1.109256869s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 cache add registry.k8s.io/pause:latest: (1.159768994s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3876589484/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cache add minikube-local-cache-test:functional-123579
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cache delete minikube-local-cache-test:functional-123579
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-123579
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.703211ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.91s)
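
The reload cycle above amounts to: delete a cached image from the node, confirm crictl no longer finds it, then have minikube push the host-side cached copy back. A minimal sketch using only the commands this test records:

  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit status 1: image is gone
  out/minikube-linux-arm64 -p functional-123579 cache reload
  out/minikube-linux-arm64 -p functional-123579 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again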

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1621562661/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 config get cpus: exit status 14 (102.066519ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 config get cpus: exit status 14 (59.094317ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (186.945901ms)

-- stdout --
	* [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 10:50:15.788553  546996 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:50:15.788742  546996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:15.788768  546996 out.go:374] Setting ErrFile to fd 2...
	I1206 10:50:15.788791  546996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:15.789091  546996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:50:15.789503  546996 out.go:368] Setting JSON to false
	I1206 10:50:15.790378  546996 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12767,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:50:15.790478  546996 start.go:143] virtualization:  
	I1206 10:50:15.794274  546996 out.go:179] * [functional-123579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:50:15.798171  546996 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:50:15.798276  546996 notify.go:221] Checking for updates...
	I1206 10:50:15.804298  546996 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:50:15.807294  546996 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:50:15.810235  546996 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:50:15.813209  546996 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:50:15.816160  546996 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:50:15.819761  546996 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:50:15.820407  546996 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:50:15.844832  546996 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:50:15.844944  546996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:50:15.901652  546996 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:50:15.892107642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:50:15.901759  546996 docker.go:319] overlay module found
	I1206 10:50:15.904952  546996 out.go:179] * Using the docker driver based on existing profile
	I1206 10:50:15.907874  546996 start.go:309] selected driver: docker
	I1206 10:50:15.907891  546996 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:50:15.908002  546996 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:50:15.911497  546996 out.go:203] 
	W1206 10:50:15.914212  546996 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:50:15.917047  546996 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-123579 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
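
Exit status 23 above is minikube's RSRC_INSUFFICIENT_REQ_MEMORY pre-flight check rejecting the 250MB request before any resources are created. A minimal sketch of the same validation, using only flags recorded in this run:

  out/minikube-linux-arm64 start -p functional-123579 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0; echo $?    # 23: below the 1800MB usable minimum
  out/minikube-linux-arm64 start -p functional-123579 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0    # passes validation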

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-123579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (185.655353ms)

-- stdout --
	* [functional-123579] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 10:50:16.228088  547112 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:50:16.228226  547112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:16.228235  547112 out.go:374] Setting ErrFile to fd 2...
	I1206 10:50:16.228242  547112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:50:16.228610  547112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:50:16.229021  547112 out.go:368] Setting JSON to false
	I1206 10:50:16.229905  547112 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12768,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 10:50:16.229980  547112 start.go:143] virtualization:  
	I1206 10:50:16.233324  547112 out.go:179] * [functional-123579] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 10:50:16.236330  547112 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:50:16.236404  547112 notify.go:221] Checking for updates...
	I1206 10:50:16.242247  547112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:50:16.245134  547112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 10:50:16.248006  547112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 10:50:16.250882  547112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:50:16.253750  547112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:50:16.256997  547112 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:50:16.257560  547112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:50:16.278739  547112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:50:16.278856  547112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:50:16.342153  547112 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:50:16.332904034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:50:16.342267  547112 docker.go:319] overlay module found
	I1206 10:50:16.345362  547112 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 10:50:16.348240  547112 start.go:309] selected driver: docker
	I1206 10:50:16.348265  547112 start.go:927] validating driver "docker" against &{Name:functional-123579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-123579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:50:16.348367  547112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:50:16.351933  547112 out.go:203] 
	W1206 10:50:16.354791  547112 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 10:50:16.357639  547112 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh -n functional-123579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cp functional-123579:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2311564542/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh -n functional-123579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh -n functional-123579 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/488068/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /etc/test/nested/copy/488068/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/488068.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /etc/ssl/certs/488068.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/488068.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /usr/share/ca-certificates/488068.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4880682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /etc/ssl/certs/4880682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4880682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /usr/share/ca-certificates/4880682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "sudo systemctl is-active docker": exit status 1 (325.977861ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "sudo systemctl is-active containerd": exit status 1 (392.289035ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-123579 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-123579
localhost/kicbase/echo-server:functional-123579
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-123579 image ls --format short --alsologtostderr:
I1206 10:50:19.240658  547760 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:19.240822  547760 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:19.240846  547760 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:19.240859  547760 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:19.241137  547760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:19.241757  547760 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:19.241891  547760 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:19.242386  547760 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:19.259818  547760 ssh_runner.go:195] Run: systemctl --version
I1206 10:50:19.259886  547760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:19.277599  547760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
I1206 10:50:19.381778  547760 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-123579 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-123579  │ 6dd70e736aee7 │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ localhost/kicbase/echo-server           │ functional-123579  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-123579  │ 1276d23f1e1a0 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-123579 image ls --format table --alsologtostderr:
I1206 10:50:23.874263  548254 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:23.874390  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:23.874401  548254 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:23.874406  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:23.874667  548254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:23.875367  548254 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:23.875512  548254 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:23.876038  548254 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:23.896285  548254 ssh_runner.go:195] Run: systemctl --version
I1206 10:50:23.896344  548254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:23.914689  548254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
I1206 10:50:24.026515  548254 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-123579 image ls --format json --alsologtostderr:
[{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"44f765e2bb0193533533d43d76bd8f7199f6854416c15da76639b414789ad274","repoDigests":["docker.io/library/080dca1706650c0a83719aa46ae751200c985b644238307c357520bed648f8e4-tmp@sha256:e07c3b16094e43ecfcb8d64f64ef461d3da22e20b12eb4c94679addb6fed4765"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-123579"],"size":"4788229"},{"id":"1276d23f1e1a0925da9c89d6e0ce9c53c0dc33e991879b35be5778ca1ec8c6cf","repoDigests":["localhost/minikube-local-cache-test@sha256:6a0caa4c1af2161ef610bb873a2b82e1a9988a94cae0f305c031005a66f135e5"],"repoTags":["localhost/minikube-local-cache-test:functional-123579"],"size":"3330"},{"id":"6dd70e736aee7f9df947f555c57f88f90f839c286a8424453ea51a145d9e5f2b","repoDigests":["localhost/my-image@sha256:cb1a296a069ee7c23a8c521c577ce5b8e0a7c09b12f41a59b416db473e98c1c7"],"repoTags":["localhost/my-image:functional-123579"],"size":"1640790"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-123579 image ls --format json --alsologtostderr:
I1206 10:50:23.636957  548218 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:23.637129  548218 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:23.637141  548218 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:23.637147  548218 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:23.637434  548218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:23.638097  548218 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:23.638272  548218 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:23.638871  548218 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:23.656383  548218 ssh_runner.go:195] Run: systemctl --version
I1206 10:50:23.656445  548218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:23.675550  548218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
I1206 10:50:23.782549  548218 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)
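
Each entry in the JSON above carries exactly four fields: id, repoDigests, repoTags, and size (bytes as a decimal string). A minimal sketch for consuming the array that "image ls --format json" prints, with field names taken from the output captured here; the struct and the printing loop are illustrative only:

// Decode the array printed by "minikube image ls --format json" from stdin
// and print one "tag  size" line per image. Images with no tag (such as the
// intermediate build layer above) are shown as <none>.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	var imgs []image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, im := range imgs {
		tag := "<none>"
		if len(im.RepoTags) > 0 {
			tag = im.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, im.Size)
	}
}

Piping the command from this test into it (out/minikube-linux-arm64 -p functional-123579 image ls --format json | go run parse_images.go, file name hypothetical) reproduces the inventory above one line per image.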

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-123579 image ls --format yaml --alsologtostderr:
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-123579
size: "4788229"
- id: 1276d23f1e1a0925da9c89d6e0ce9c53c0dc33e991879b35be5778ca1ec8c6cf
repoDigests:
- localhost/minikube-local-cache-test@sha256:6a0caa4c1af2161ef610bb873a2b82e1a9988a94cae0f305c031005a66f135e5
repoTags:
- localhost/minikube-local-cache-test:functional-123579
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-123579 image ls --format yaml --alsologtostderr:
I1206 10:50:19.488990  547802 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:19.489129  547802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:19.489140  547802 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:19.489145  547802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:19.489407  547802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:19.490013  547802 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:19.490140  547802 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:19.490631  547802 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:19.507336  547802 ssh_runner.go:195] Run: systemctl --version
I1206 10:50:19.507409  547802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:19.526514  547802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
I1206 10:50:19.629556  547802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh pgrep buildkitd: exit status 1 (274.477693ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image build -t localhost/my-image:functional-123579 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 image build -t localhost/my-image:functional-123579 testdata/build --alsologtostderr: (3.41285111s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-123579 image build -t localhost/my-image:functional-123579 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 44f765e2bb0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-123579
--> 6dd70e736ae
Successfully tagged localhost/my-image:functional-123579
6dd70e736aee7f9df947f555c57f88f90f839c286a8424453ea51a145d9e5f2b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-123579 image build -t localhost/my-image:functional-123579 testdata/build --alsologtostderr:
I1206 10:50:19.992937  547902 out.go:360] Setting OutFile to fd 1 ...
I1206 10:50:19.993129  547902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:19.993153  547902 out.go:374] Setting ErrFile to fd 2...
I1206 10:50:19.993176  547902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:50:19.993462  547902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
I1206 10:50:19.994150  547902 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:19.994897  547902 config.go:182] Loaded profile config "functional-123579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 10:50:19.995528  547902 cli_runner.go:164] Run: docker container inspect functional-123579 --format={{.State.Status}}
I1206 10:50:20.019819  547902 ssh_runner.go:195] Run: systemctl --version
I1206 10:50:20.019888  547902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-123579
I1206 10:50:20.039537  547902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/functional-123579/id_rsa Username:docker}
I1206 10:50:20.145942  547902 build_images.go:162] Building image from path: /tmp/build.1766110013.tar
I1206 10:50:20.146011  547902 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 10:50:20.154243  547902 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1766110013.tar
I1206 10:50:20.158272  547902 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1766110013.tar: stat -c "%s %y" /var/lib/minikube/build/build.1766110013.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1766110013.tar': No such file or directory
I1206 10:50:20.158308  547902 ssh_runner.go:362] scp /tmp/build.1766110013.tar --> /var/lib/minikube/build/build.1766110013.tar (3072 bytes)
I1206 10:50:20.177657  547902 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1766110013
I1206 10:50:20.188672  547902 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1766110013 -xf /var/lib/minikube/build/build.1766110013.tar
I1206 10:50:20.198581  547902 crio.go:315] Building image: /var/lib/minikube/build/build.1766110013
I1206 10:50:20.198651  547902 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-123579 /var/lib/minikube/build/build.1766110013 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1206 10:50:23.331153  547902 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-123579 /var/lib/minikube/build/build.1766110013 --cgroup-manager=cgroupfs: (3.132478631s)
I1206 10:50:23.331224  547902 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1766110013
I1206 10:50:23.338822  547902 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1766110013.tar
I1206 10:50:23.346385  547902 build_images.go:218] Built localhost/my-image:functional-123579 from /tmp/build.1766110013.tar
I1206 10:50:23.346415  547902 build_images.go:134] succeeded building to: functional-123579
I1206 10:50:23.346420  547902 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.92s)
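
The stderr trace above shows what "image build" does on the crio runtime: the build context is tarred locally under /tmp, copied to /var/lib/minikube/build/build.<N>.tar on the node, unpacked, and handed to podman with --cgroup-manager=cgroupfs, after which the staging directory and tar are removed. A sketch that replays only the podman step over "minikube ssh", assuming the context is still staged exactly as in the trace (tag and paths copied from the log; this is not minikube's own implementation):

// Re-run the podman build that minikube delegated to above, against the
// already-unpacked context under /var/lib/minikube/build.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-123579", "ssh",
		"sudo podman build -t localhost/my-image:functional-123579 "+
			"/var/lib/minikube/build/build.1766110013 --cgroup-manager=cgroupfs")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}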

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-123579
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image load --daemon kicbase/echo-server:functional-123579 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-123579 image load --daemon kicbase/echo-server:functional-123579 --alsologtostderr: (1.26083604s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image load --daemon kicbase/echo-server:functional-123579 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-123579
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image load --daemon kicbase/echo-server:functional-123579 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image save kicbase/echo-server:functional-123579 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image rm kicbase/echo-server:functional-123579 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-123579
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 image save --daemon kicbase/echo-server:functional-123579 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-123579
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.53s)
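
Taken together, the last four image tests exercise a full round trip: save a tagged image to a tar file, remove it, load it back from the file, and finally save it straight into the host's docker daemon. A sketch chaining the same four commands from Go (tag and tar path copied from the runs above; the helper and error handling are illustrative):

// Round-trip kicbase/echo-server through a tar file and back into the
// docker daemon, mirroring ImageSaveToFile / ImageRemove /
// ImageLoadFromFile / ImageSaveDaemon.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-123579"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar"
	steps := [][]string{
		{"image", "save", "kicbase/echo-server:functional-123579", tar},
		{"image", "rm", "kicbase/echo-server:functional-123579"},
		{"image", "load", tar},
		{"image", "save", "--daemon", "kicbase/echo-server:functional-123579"},
	}
	for _, step := range steps {
		if err := mk(step...); err != nil {
			fmt.Println("step failed:", step, err)
			return
		}
	}
}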

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-123579 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "330.070571ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.418112ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "336.797454ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "49.280097ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)
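
The timings in these two tests make the case for the light variants: the full listings took ~330ms while "profile list -l" and "profile list -o json --light" returned in ~50ms, presumably because the light path skips the per-cluster status probes (that interpretation is an inference, not something this log states). A sketch that reproduces the measurement:

// Time "profile list" against its light variant, as the harness does above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func timed(args ...string) time.Duration {
	start := time.Now()
	_ = exec.Command("out/minikube-linux-arm64", args...).Run() // output discarded
	return time.Since(start)
}

func main() {
	fmt.Println("profile list:   ", timed("profile", "list"))
	fmt.Println("profile list -l:", timed("profile", "list", "-l"))
}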

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1745843341/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (370.456908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:50:11.902441  488068 retry.go:31] will retry after 504.255962ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1745843341/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "sudo umount -f /mount-9p"
E1206 10:50:13.255337  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "sudo umount -f /mount-9p": exit status 1 (278.730605ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-123579 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1745843341/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)
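
The first findmnt probe races the mount daemon, which is why the harness retries (the retry.go line above waited ~504ms before the second, successful probe). A sketch of the same poll with a fixed interval and deadline, both of which are illustrative choices rather than the harness's randomized backoff:

// Poll until the 9p mount shows up inside the node, the way the retried
// "findmnt -T /mount-9p | grep 9p" probe above does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-123579",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		fmt.Println("not mounted yet:", err) // e.g. "exit status 1", as above
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /mount-9p")
}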

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T" /mount1: exit status 1 (573.120908ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
I1206 10:50:14.069928  488068 retry.go:31] will retry after 713.011006ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-123579 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-123579 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-123579 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1916336964/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-123579
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-123579
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-123579
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (204.86s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1206 10:53:15.605032  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:15.611479  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:15.622835  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:15.644205  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:15.685562  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:15.767008  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:15.928478  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:16.250240  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:16.892062  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:18.173514  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:20.735950  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:25.857920  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:35.937348  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:36.099840  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:53:56.582157  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:37.544371  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:55:13.255621  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m23.953562753s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (204.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (9.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 kubectl -- rollout status deployment/busybox: (6.392681116s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-4s7ds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-bbhwd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-jfm4c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-4s7ds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-bbhwd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-jfm4c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-4s7ds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-bbhwd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-jfm4c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.19s)
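
For reference, the same DNS verification can be replayed by hand once the rollout finishes; the pod name below is illustrative, since the real names come from the get pods step:

  minikube -p ha-747644 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
  minikube -p ha-747644 kubectl -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local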

TestMultiControlPlane/serial/PingHostFromPods (1.54s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-4s7ds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-4s7ds -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-bbhwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-bbhwd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-jfm4c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 kubectl -- exec busybox-7b57f96db7-jfm4c -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)
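
The exec pipeline above extracts the host address from busybox's nslookup output: line 5 carries the answer record, and the third space-separated field is the IP, which is then pinged. A minimal sketch, assuming the busybox applet's output layout:

  ip=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
  ping -c 1 "$ip"    # resolved to 192.168.49.1 in this run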

TestMultiControlPlane/serial/AddWorkerNode (59.21s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node add --alsologtostderr -v 5
E1206 10:55:59.466571  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 node add --alsologtostderr -v 5: (58.140693998s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5: (1.068480187s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.21s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-747644 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.050844995s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (20.68s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 status --output json --alsologtostderr -v 5: (1.071386788s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp testdata/cp-test.txt ha-747644:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1950661346/001/cp-test_ha-747644.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644:/home/docker/cp-test.txt ha-747644-m02:/home/docker/cp-test_ha-747644_ha-747644-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test_ha-747644_ha-747644-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644:/home/docker/cp-test.txt ha-747644-m03:/home/docker/cp-test_ha-747644_ha-747644-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test_ha-747644_ha-747644-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644:/home/docker/cp-test.txt ha-747644-m04:/home/docker/cp-test_ha-747644_ha-747644-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test_ha-747644_ha-747644-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp testdata/cp-test.txt ha-747644-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1950661346/001/cp-test_ha-747644-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m02:/home/docker/cp-test.txt ha-747644:/home/docker/cp-test_ha-747644-m02_ha-747644.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test_ha-747644-m02_ha-747644.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m02:/home/docker/cp-test.txt ha-747644-m03:/home/docker/cp-test_ha-747644-m02_ha-747644-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test_ha-747644-m02_ha-747644-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m02:/home/docker/cp-test.txt ha-747644-m04:/home/docker/cp-test_ha-747644-m02_ha-747644-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test_ha-747644-m02_ha-747644-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp testdata/cp-test.txt ha-747644-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1950661346/001/cp-test_ha-747644-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m03:/home/docker/cp-test.txt ha-747644:/home/docker/cp-test_ha-747644-m03_ha-747644.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test_ha-747644-m03_ha-747644.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m03:/home/docker/cp-test.txt ha-747644-m02:/home/docker/cp-test_ha-747644-m03_ha-747644-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test_ha-747644-m03_ha-747644-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m03:/home/docker/cp-test.txt ha-747644-m04:/home/docker/cp-test_ha-747644-m03_ha-747644-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test_ha-747644-m03_ha-747644-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp testdata/cp-test.txt ha-747644-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1950661346/001/cp-test_ha-747644-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m04:/home/docker/cp-test.txt ha-747644:/home/docker/cp-test_ha-747644-m04_ha-747644.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644 "sudo cat /home/docker/cp-test_ha-747644-m04_ha-747644.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m04:/home/docker/cp-test.txt ha-747644-m02:/home/docker/cp-test_ha-747644-m04_ha-747644-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test_ha-747644-m04_ha-747644-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 cp ha-747644-m04:/home/docker/cp-test.txt ha-747644-m03:/home/docker/cp-test_ha-747644-m04_ha-747644-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 ssh -n ha-747644-m03 "sudo cat /home/docker/cp-test_ha-747644-m04_ha-747644-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.68s)
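
The copy matrix above reduces to three command shapes; the /tmp/out.txt path below is illustrative:

  minikube -p ha-747644 cp testdata/cp-test.txt ha-747644-m02:/home/docker/cp-test.txt    # host -> node
  minikube -p ha-747644 cp ha-747644-m02:/home/docker/cp-test.txt /tmp/out.txt            # node -> host (node -> node works the same way)
  minikube -p ha-747644 ssh -n ha-747644-m02 "sudo cat /home/docker/cp-test.txt"          # verify the contents landed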

TestMultiControlPlane/serial/StopSecondaryNode (3.16s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 node stop m02 --alsologtostderr -v 5: (2.370748535s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5: exit status 7 (792.469871ms)

-- stdout --
	ha-747644
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-747644-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-747644-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-747644-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1206 10:57:20.546846  564036 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:57:20.546972  564036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:57:20.546984  564036 out.go:374] Setting ErrFile to fd 2...
	I1206 10:57:20.546989  564036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:57:20.547278  564036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 10:57:20.547469  564036 out.go:368] Setting JSON to false
	I1206 10:57:20.547512  564036 mustload.go:66] Loading cluster: ha-747644
	I1206 10:57:20.547581  564036 notify.go:221] Checking for updates...
	I1206 10:57:20.548799  564036 config.go:182] Loaded profile config "ha-747644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:57:20.548836  564036 status.go:174] checking status of ha-747644 ...
	I1206 10:57:20.549520  564036 cli_runner.go:164] Run: docker container inspect ha-747644 --format={{.State.Status}}
	I1206 10:57:20.572356  564036 status.go:371] ha-747644 host status = "Running" (err=<nil>)
	I1206 10:57:20.572386  564036 host.go:66] Checking if "ha-747644" exists ...
	I1206 10:57:20.572810  564036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-747644
	I1206 10:57:20.598349  564036 host.go:66] Checking if "ha-747644" exists ...
	I1206 10:57:20.598696  564036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:57:20.598748  564036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-747644
	I1206 10:57:20.620469  564036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/ha-747644/id_rsa Username:docker}
	I1206 10:57:20.724632  564036 ssh_runner.go:195] Run: systemctl --version
	I1206 10:57:20.731413  564036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:57:20.744737  564036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:57:20.821890  564036 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-06 10:57:20.81163692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:57:20.822611  564036 kubeconfig.go:125] found "ha-747644" server: "https://192.168.49.254:8443"
	I1206 10:57:20.822652  564036 api_server.go:166] Checking apiserver status ...
	I1206 10:57:20.822710  564036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:57:20.835290  564036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1266/cgroup
	I1206 10:57:20.844641  564036 api_server.go:182] apiserver freezer: "5:freezer:/docker/ed7d5065d7e9d9b1e6fb2072c19ce863ceed104ddcd99cbee3b9d6d8894f4945/crio/crio-520a2cbd547532ca7615e7917d93cb359b72331d04465228bc0a1020994a119b"
	I1206 10:57:20.844712  564036 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ed7d5065d7e9d9b1e6fb2072c19ce863ceed104ddcd99cbee3b9d6d8894f4945/crio/crio-520a2cbd547532ca7615e7917d93cb359b72331d04465228bc0a1020994a119b/freezer.state
	I1206 10:57:20.852669  564036 api_server.go:204] freezer state: "THAWED"
	I1206 10:57:20.852696  564036 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 10:57:20.861359  564036 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 10:57:20.861395  564036 status.go:463] ha-747644 apiserver status = Running (err=<nil>)
	I1206 10:57:20.861413  564036 status.go:176] ha-747644 status: &{Name:ha-747644 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:57:20.861439  564036 status.go:174] checking status of ha-747644-m02 ...
	I1206 10:57:20.861790  564036 cli_runner.go:164] Run: docker container inspect ha-747644-m02 --format={{.State.Status}}
	I1206 10:57:20.880907  564036 status.go:371] ha-747644-m02 host status = "Stopped" (err=<nil>)
	I1206 10:57:20.880933  564036 status.go:384] host is not running, skipping remaining checks
	I1206 10:57:20.880940  564036 status.go:176] ha-747644-m02 status: &{Name:ha-747644-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:57:20.880970  564036 status.go:174] checking status of ha-747644-m03 ...
	I1206 10:57:20.881287  564036 cli_runner.go:164] Run: docker container inspect ha-747644-m03 --format={{.State.Status}}
	I1206 10:57:20.898714  564036 status.go:371] ha-747644-m03 host status = "Running" (err=<nil>)
	I1206 10:57:20.898749  564036 host.go:66] Checking if "ha-747644-m03" exists ...
	I1206 10:57:20.899060  564036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-747644-m03
	I1206 10:57:20.916514  564036 host.go:66] Checking if "ha-747644-m03" exists ...
	I1206 10:57:20.916837  564036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:57:20.916882  564036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-747644-m03
	I1206 10:57:20.934306  564036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/ha-747644-m03/id_rsa Username:docker}
	I1206 10:57:21.037279  564036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:57:21.053396  564036 kubeconfig.go:125] found "ha-747644" server: "https://192.168.49.254:8443"
	I1206 10:57:21.053429  564036 api_server.go:166] Checking apiserver status ...
	I1206 10:57:21.053469  564036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:57:21.065663  564036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup
	I1206 10:57:21.074616  564036 api_server.go:182] apiserver freezer: "5:freezer:/docker/0eb4a002d3a8b8534a1c675e166dda336ffa128d263f6a5e5d9aaeb7ab6cebb2/crio/crio-2ff818fd7d6bd829974746baacd704faecb338b267dd3d974c7bc4644e4370d8"
	I1206 10:57:21.074699  564036 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0eb4a002d3a8b8534a1c675e166dda336ffa128d263f6a5e5d9aaeb7ab6cebb2/crio/crio-2ff818fd7d6bd829974746baacd704faecb338b267dd3d974c7bc4644e4370d8/freezer.state
	I1206 10:57:21.082794  564036 api_server.go:204] freezer state: "THAWED"
	I1206 10:57:21.082824  564036 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 10:57:21.091185  564036 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 10:57:21.091212  564036 status.go:463] ha-747644-m03 apiserver status = Running (err=<nil>)
	I1206 10:57:21.091221  564036 status.go:176] ha-747644-m03 status: &{Name:ha-747644-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:57:21.091237  564036 status.go:174] checking status of ha-747644-m04 ...
	I1206 10:57:21.091533  564036 cli_runner.go:164] Run: docker container inspect ha-747644-m04 --format={{.State.Status}}
	I1206 10:57:21.114708  564036 status.go:371] ha-747644-m04 host status = "Running" (err=<nil>)
	I1206 10:57:21.114738  564036 host.go:66] Checking if "ha-747644-m04" exists ...
	I1206 10:57:21.115046  564036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-747644-m04
	I1206 10:57:21.134757  564036 host.go:66] Checking if "ha-747644-m04" exists ...
	I1206 10:57:21.135262  564036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:57:21.135320  564036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-747644-m04
	I1206 10:57:21.157047  564036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/ha-747644-m04/id_rsa Username:docker}
	I1206 10:57:21.268543  564036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:57:21.282162  564036 status.go:176] ha-747644-m04 status: &{Name:ha-747644-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (3.16s)
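
The stderr trace spells out how status verifies each control-plane apiserver: pgrep finds the newest kube-apiserver process, /proc/<pid>/cgroup yields its freezer cgroup, the freezer state must read THAWED, and /healthz is probed through the HA endpoint. A rough hand-rolled equivalent, run inside a node via minikube ssh:

  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  cg=$(sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
  sudo cat /sys/fs/cgroup/freezer$cg/freezer.state    # expect THAWED
  curl -ks https://192.168.49.254:8443/healthz        # expect ok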

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)
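
The degraded check reads profile JSON rather than node objects. Assuming the usual valid/invalid layout with a per-profile Status field, the same state can be inspected with:

  minikube profile list --output json | jq -r '.valid[] | .Name + "\t" + .Status'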

TestMultiControlPlane/serial/RestartSecondaryNode (31.18s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 node start m02 --alsologtostderr -v 5: (29.73084262s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5: (1.29131105s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.408381435s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 stop --alsologtostderr -v 5
E1206 10:58:15.605155  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:58:16.325410  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 stop --alsologtostderr -v 5: (27.49133472s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 start --wait true --alsologtostderr -v 5
E1206 10:58:35.937203  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:58:43.308533  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 start --wait true --alsologtostderr -v 5: (1m29.741157084s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.41s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 node delete m03 --alsologtostderr -v 5: (10.546387621s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

TestMultiControlPlane/serial/StopCluster (36.28s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 stop --alsologtostderr -v 5
E1206 11:00:13.256567  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 stop --alsologtostderr -v 5: (36.157743472s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5: exit status 7 (126.147282ms)

-- stdout --
	ha-747644
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-747644-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-747644-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:00:40.677075  575637 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:00:40.677280  575637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:00:40.677307  575637 out.go:374] Setting ErrFile to fd 2...
	I1206 11:00:40.677327  575637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:00:40.677633  575637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:00:40.677869  575637 out.go:368] Setting JSON to false
	I1206 11:00:40.677921  575637 mustload.go:66] Loading cluster: ha-747644
	I1206 11:00:40.678429  575637 config.go:182] Loaded profile config "ha-747644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:00:40.678484  575637 status.go:174] checking status of ha-747644 ...
	I1206 11:00:40.677982  575637 notify.go:221] Checking for updates...
	I1206 11:00:40.679619  575637 cli_runner.go:164] Run: docker container inspect ha-747644 --format={{.State.Status}}
	I1206 11:00:40.699077  575637 status.go:371] ha-747644 host status = "Stopped" (err=<nil>)
	I1206 11:00:40.699105  575637 status.go:384] host is not running, skipping remaining checks
	I1206 11:00:40.699112  575637 status.go:176] ha-747644 status: &{Name:ha-747644 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:00:40.699242  575637 status.go:174] checking status of ha-747644-m02 ...
	I1206 11:00:40.699570  575637 cli_runner.go:164] Run: docker container inspect ha-747644-m02 --format={{.State.Status}}
	I1206 11:00:40.730361  575637 status.go:371] ha-747644-m02 host status = "Stopped" (err=<nil>)
	I1206 11:00:40.730386  575637 status.go:384] host is not running, skipping remaining checks
	I1206 11:00:40.730394  575637 status.go:176] ha-747644-m02 status: &{Name:ha-747644-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:00:40.730413  575637 status.go:174] checking status of ha-747644-m04 ...
	I1206 11:00:40.730766  575637 cli_runner.go:164] Run: docker container inspect ha-747644-m04 --format={{.State.Status}}
	I1206 11:00:40.750505  575637 status.go:371] ha-747644-m04 host status = "Stopped" (err=<nil>)
	I1206 11:00:40.750530  575637 status.go:384] host is not running, skipping remaining checks
	I1206 11:00:40.750538  575637 status.go:176] ha-747644-m04 status: &{Name:ha-747644-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.28s)

TestMultiControlPlane/serial/RestartCluster (62.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m1.905971182s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (62.91s)
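
The go-template in the final step prints one line per node holding just the Ready condition's status; after a clean restart every line should read True. A standalone equivalent with simplified quoting:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'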

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (83.35s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 node add --control-plane --alsologtostderr -v 5: (1m22.266255747s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-747644 status --alsologtostderr -v 5: (1.085550365s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.35s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.107344371s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

TestJSONOutput/start/Command (75.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-767261 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1206 11:03:35.937667  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-767261 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m15.508912377s)
--- PASS: TestJSONOutput/start/Command (75.52s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-767261 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-767261 --output=json --user=testUser: (5.823057676s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-893427 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-893427 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (106.820605ms)

-- stdout --
	{"specversion":"1.0","id":"09372281-4f04-400c-9c5c-35990590ee78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-893427] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddb0b72d-237b-4807-9ade-377aa606e5e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"d453f887-77dc-4eb7-a474-0aaa8c341058","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e459e25d-8598-47a5-82af-51d4d2087de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig"}}
	{"specversion":"1.0","id":"4f4ec64d-f269-47ab-8151-43ff26d526b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube"}}
	{"specversion":"1.0","id":"2b917eac-a547-4e11-9127-5a587e927376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"30c6f5f4-665b-463e-a2aa-4924d5083f6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a71957fd-868d-4e36-b8d7-7011b9abf5a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-893427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-893427
--- PASS: TestErrorJSONOutput (0.26s)
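
Each line of --output=json is a CloudEvent whose type field distinguishes steps, info messages, and errors, as the captured stdout shows. One way to follow just the human-readable messages (jq consumes the newline-delimited stream):

  minikube start -p json-output-767261 --output=json --user=testUser \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step" or .type == "io.k8s.sigs.minikube.error") | .data.message'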

TestKicCustomNetwork/create_custom_network (36.75s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-767489 --network=
E1206 11:05:13.259324  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-767489 --network=: (34.477256681s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-767489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-767489
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-767489: (2.240605422s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.75s)

TestKicCustomNetwork/use_default_bridge_network (35.45s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-347392 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-347392 --network=bridge: (33.278057165s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-347392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-347392
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-347392: (2.15041903s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.45s)

TestKicExistingNetwork (33.26s)

=== RUN   TestKicExistingNetwork
I1206 11:06:00.441709  488068 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 11:06:00.459624  488068 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 11:06:00.460548  488068 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 11:06:00.460584  488068 cli_runner.go:164] Run: docker network inspect existing-network
W1206 11:06:00.477221  488068 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 11:06:00.477253  488068 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1206 11:06:00.477268  488068 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1206 11:06:00.477394  488068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 11:06:00.497254  488068 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-194638dca10b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:a6:03:7b:5f:e6} reservation:<nil>}
I1206 11:06:00.497713  488068 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001867cc0}
I1206 11:06:00.497741  488068 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 11:06:00.497796  488068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 11:06:00.559544  488068 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-572570 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-572570 --network=existing-network: (30.942672636s)
helpers_test.go:175: Cleaning up "existing-network-572570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-572570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-572570: (2.155789177s)
I1206 11:06:33.675324  488068 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.26s)
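For reference, the flow this test exercises can be reproduced by hand; a minimal sketch, using illustrative profile/network names rather than the generated ones above (minikube here stands for the out/minikube-linux-arm64 binary under test):

    docker network create --driver=bridge --subnet=192.168.58.0/24 existing-network
    minikube start -p existing-net-demo --network=existing-network
    minikube delete -p existing-net-demo
    docker network rm existing-network

minikube attaches the profile's container to the pre-existing bridge instead of allocating its own, which is why the subnet scan above only had to skip 192.168.49.0/24 before settling on 192.168.58.0/24.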

TestKicCustomSubnet (38.56s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-411470 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-411470 --subnet=192.168.60.0/24: (36.298938249s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-411470 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-411470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-411470
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-411470: (2.233302411s)
--- PASS: TestKicCustomSubnet (38.56s)
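A minimal by-hand equivalent of this check, with an illustrative profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
    minikube delete -p subnet-demo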

TestKicStaticIP (36.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-035303 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-035303 --static-ip=192.168.200.200: (34.272239936s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-035303 ip
helpers_test.go:175: Cleaning up "static-ip-035303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-035303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-035303: (2.25545728s)
--- PASS: TestKicStaticIP (36.69s)
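The same flow by hand (profile name illustrative); the requested address is expected to be a private IPv4, as here:

    minikube start -p static-ip-demo --static-ip=192.168.200.200
    minikube -p static-ip-demo ip        # expected to print 192.168.200.200
    minikube delete -p static-ip-demo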

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.65s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-597618 --driver=docker  --container-runtime=crio
E1206 11:08:15.605121  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:19.008033  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-597618 --driver=docker  --container-runtime=crio: (33.280841119s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-600515 --driver=docker  --container-runtime=crio
E1206 11:08:35.937434  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-600515 --driver=docker  --container-runtime=crio: (30.698841102s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-597618
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-600515
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-600515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-600515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-600515: (2.183504702s)
helpers_test.go:175: Cleaning up "first-597618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-597618
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-597618: (2.03359951s)
--- PASS: TestMinikubeProfile (69.65s)
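Reduced to its essentials, the test drives two profiles and the profile selector (names illustrative):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # both profiles should appear in the JSON listing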

TestMountStart/serial/StartWithMountFirst (9.07s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-101597 --memory=3072 --mount-string /tmp/TestMountStartserial2855191111/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-101597 --memory=3072 --mount-string /tmp/TestMountStartserial2855191111/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.070585181s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.07s)
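Mirroring the flags from the run above, a hand-run sketch (host path and profile name illustrative):

    minikube start -p mount-demo --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host   # host directory visible from the guest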

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-101597 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (8.89s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-103705 --memory=3072 --mount-string /tmp/TestMountStartserial2855191111/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-103705 --memory=3072 --mount-string /tmp/TestMountStartserial2855191111/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.889665912s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.89s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-103705 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-101597 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-101597 --alsologtostderr -v=5: (1.742168136s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-103705 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-103705
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-103705: (1.298066696s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.99s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-103705
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-103705: (6.992352929s)
--- PASS: TestMountStart/serial/RestartStopped (7.99s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-103705 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (139.82s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541280 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1206 11:09:38.672121  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:10:13.255260  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541280 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.268273324s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.82s)
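The equivalent manual invocation (profile name illustrative):

    minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
    minikube -p multinode-demo status   # one control plane plus one worker, all Running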

TestMultiNode/serial/DeployApp2Nodes (5.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-541280 -- rollout status deployment/busybox: (3.408226609s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-5f8zx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-6d6q4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-5f8zx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-6d6q4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-5f8zx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-6d6q4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.24s)
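The DNS checks above boil down to one pattern per pod (context name illustrative; the busybox deployment comes from testdata/multinodes/multinode-pod-dns-test.yaml):

    kubectl --context multinode-demo rollout status deployment/busybox
    POD=$(kubectl --context multinode-demo get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl --context multinode-demo exec "$POD" -- nslookup kubernetes.default.svc.cluster.local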

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-5f8zx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-5f8zx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-6d6q4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541280 -- exec busybox-7b57f96db7-6d6q4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (56.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-541280 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-541280 -v=5 --alsologtostderr: (56.053632468s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.76s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-541280 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.75s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

TestMultiNode/serial/CopyFile (11.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp testdata/cp-test.txt multinode-541280:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2155070728/001/cp-test_multinode-541280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280:/home/docker/cp-test.txt multinode-541280-m02:/home/docker/cp-test_multinode-541280_multinode-541280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m02 "sudo cat /home/docker/cp-test_multinode-541280_multinode-541280-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280:/home/docker/cp-test.txt multinode-541280-m03:/home/docker/cp-test_multinode-541280_multinode-541280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m03 "sudo cat /home/docker/cp-test_multinode-541280_multinode-541280-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp testdata/cp-test.txt multinode-541280-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2155070728/001/cp-test_multinode-541280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280-m02:/home/docker/cp-test.txt multinode-541280:/home/docker/cp-test_multinode-541280-m02_multinode-541280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280 "sudo cat /home/docker/cp-test_multinode-541280-m02_multinode-541280.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280-m02:/home/docker/cp-test.txt multinode-541280-m03:/home/docker/cp-test_multinode-541280-m02_multinode-541280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m03 "sudo cat /home/docker/cp-test_multinode-541280-m02_multinode-541280-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp testdata/cp-test.txt multinode-541280-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2155070728/001/cp-test_multinode-541280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280-m03:/home/docker/cp-test.txt multinode-541280:/home/docker/cp-test_multinode-541280-m03_multinode-541280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280 "sudo cat /home/docker/cp-test_multinode-541280-m03_multinode-541280.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 cp multinode-541280-m03:/home/docker/cp-test.txt multinode-541280-m02:/home/docker/cp-test_multinode-541280-m03_multinode-541280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 ssh -n multinode-541280-m02 "sudo cat /home/docker/cp-test_multinode-541280-m03_multinode-541280-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.04s)
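Every direction above is the same two-step: minikube cp, then an ssh -n readback on the target node; a sketch with illustrative names:

    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"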

TestMultiNode/serial/StopNode (2.43s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-541280 node stop m03: (1.313595155s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541280 status: exit status 7 (567.875126ms)

-- stdout --
	multinode-541280
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-541280-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-541280-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr: exit status 7 (550.367129ms)

-- stdout --
	multinode-541280
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-541280-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-541280-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:13:07.219501  625979 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:13:07.219686  625979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:13:07.219718  625979 out.go:374] Setting ErrFile to fd 2...
	I1206 11:13:07.219742  625979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:13:07.220017  625979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:13:07.220236  625979 out.go:368] Setting JSON to false
	I1206 11:13:07.220303  625979 mustload.go:66] Loading cluster: multinode-541280
	I1206 11:13:07.220377  625979 notify.go:221] Checking for updates...
	I1206 11:13:07.221831  625979 config.go:182] Loaded profile config "multinode-541280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:13:07.221884  625979 status.go:174] checking status of multinode-541280 ...
	I1206 11:13:07.222742  625979 cli_runner.go:164] Run: docker container inspect multinode-541280 --format={{.State.Status}}
	I1206 11:13:07.243345  625979 status.go:371] multinode-541280 host status = "Running" (err=<nil>)
	I1206 11:13:07.243370  625979 host.go:66] Checking if "multinode-541280" exists ...
	I1206 11:13:07.243755  625979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-541280
	I1206 11:13:07.278522  625979 host.go:66] Checking if "multinode-541280" exists ...
	I1206 11:13:07.278978  625979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:13:07.279054  625979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-541280
	I1206 11:13:07.298842  625979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/multinode-541280/id_rsa Username:docker}
	I1206 11:13:07.404945  625979 ssh_runner.go:195] Run: systemctl --version
	I1206 11:13:07.411709  625979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:13:07.425180  625979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:13:07.486347  625979 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:13:07.475816697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:13:07.487242  625979 kubeconfig.go:125] found "multinode-541280" server: "https://192.168.67.2:8443"
	I1206 11:13:07.487285  625979 api_server.go:166] Checking apiserver status ...
	I1206 11:13:07.487333  625979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:13:07.499550  625979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I1206 11:13:07.508422  625979 api_server.go:182] apiserver freezer: "5:freezer:/docker/d9db2bf954057e28ef27a7d98bb351dfb8861b7cc18cd84ba50600cb64cdb004/crio/crio-75d4f23784f279078d49f3bb97baa347bc6ad17a094ed7c579a78a82c3cc3479"
	I1206 11:13:07.508496  625979 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9db2bf954057e28ef27a7d98bb351dfb8861b7cc18cd84ba50600cb64cdb004/crio/crio-75d4f23784f279078d49f3bb97baa347bc6ad17a094ed7c579a78a82c3cc3479/freezer.state
	I1206 11:13:07.516472  625979 api_server.go:204] freezer state: "THAWED"
	I1206 11:13:07.516503  625979 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 11:13:07.524563  625979 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 11:13:07.524592  625979 status.go:463] multinode-541280 apiserver status = Running (err=<nil>)
	I1206 11:13:07.524610  625979 status.go:176] multinode-541280 status: &{Name:multinode-541280 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:13:07.524638  625979 status.go:174] checking status of multinode-541280-m02 ...
	I1206 11:13:07.524952  625979 cli_runner.go:164] Run: docker container inspect multinode-541280-m02 --format={{.State.Status}}
	I1206 11:13:07.542350  625979 status.go:371] multinode-541280-m02 host status = "Running" (err=<nil>)
	I1206 11:13:07.542374  625979 host.go:66] Checking if "multinode-541280-m02" exists ...
	I1206 11:13:07.542687  625979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-541280-m02
	I1206 11:13:07.562180  625979 host.go:66] Checking if "multinode-541280-m02" exists ...
	I1206 11:13:07.562507  625979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:13:07.562549  625979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-541280-m02
	I1206 11:13:07.580496  625979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/22049-484819/.minikube/machines/multinode-541280-m02/id_rsa Username:docker}
	I1206 11:13:07.684494  625979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:13:07.698465  625979 status.go:176] multinode-541280-m02 status: &{Name:multinode-541280-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:13:07.698499  625979 status.go:174] checking status of multinode-541280-m03 ...
	I1206 11:13:07.698815  625979 cli_runner.go:164] Run: docker container inspect multinode-541280-m03 --format={{.State.Status}}
	I1206 11:13:07.716487  625979 status.go:371] multinode-541280-m03 host status = "Stopped" (err=<nil>)
	I1206 11:13:07.716513  625979 status.go:384] host is not running, skipping remaining checks
	I1206 11:13:07.716520  625979 status.go:176] multinode-541280-m03 status: &{Name:multinode-541280-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
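By hand, the stop-one-node check looks like this; status deliberately exits 7 while any node is down, so the non-zero exit above is the expected outcome (names illustrative):

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status; echo "exit=$?"   # exit=7, m03 reported Stopped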

TestMultiNode/serial/StartAfterStop (8.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-541280 node start m03 -v=5 --alsologtostderr: (7.461335998s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status -v=5 --alsologtostderr
E1206 11:13:15.605996  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.27s)

TestMultiNode/serial/RestartKeepsNodes (79.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-541280
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-541280
E1206 11:13:35.937258  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-541280: (25.118910091s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541280 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541280 --wait=true -v=5 --alsologtostderr: (54.078289487s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-541280
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.34s)

TestMultiNode/serial/DeleteNode (5.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-541280 node delete m03: (5.060593139s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 stop
E1206 11:14:56.327197  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-541280 stop: (23.848187872s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541280 status: exit status 7 (95.174783ms)

-- stdout --
	multinode-541280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-541280-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr: exit status 7 (101.779368ms)

-- stdout --
	multinode-541280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-541280-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:15:05.076332  633823 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:15:05.076449  633823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:15:05.076459  633823 out.go:374] Setting ErrFile to fd 2...
	I1206 11:15:05.076465  633823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:15:05.076750  633823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:15:05.076942  633823 out.go:368] Setting JSON to false
	I1206 11:15:05.076971  633823 mustload.go:66] Loading cluster: multinode-541280
	I1206 11:15:05.077025  633823 notify.go:221] Checking for updates...
	I1206 11:15:05.077381  633823 config.go:182] Loaded profile config "multinode-541280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:15:05.077401  633823 status.go:174] checking status of multinode-541280 ...
	I1206 11:15:05.077904  633823 cli_runner.go:164] Run: docker container inspect multinode-541280 --format={{.State.Status}}
	I1206 11:15:05.098035  633823 status.go:371] multinode-541280 host status = "Stopped" (err=<nil>)
	I1206 11:15:05.098058  633823 status.go:384] host is not running, skipping remaining checks
	I1206 11:15:05.098066  633823 status.go:176] multinode-541280 status: &{Name:multinode-541280 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:15:05.098103  633823 status.go:174] checking status of multinode-541280-m02 ...
	I1206 11:15:05.098412  633823 cli_runner.go:164] Run: docker container inspect multinode-541280-m02 --format={{.State.Status}}
	I1206 11:15:05.127040  633823 status.go:371] multinode-541280-m02 host status = "Stopped" (err=<nil>)
	I1206 11:15:05.127067  633823 status.go:384] host is not running, skipping remaining checks
	I1206 11:15:05.127162  633823 status.go:176] multinode-541280-m02 status: &{Name:multinode-541280-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (49.16s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541280 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1206 11:15:13.255092  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541280 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.460670168s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541280 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.16s)

TestMultiNode/serial/ValidateNameConflict (35.63s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-541280
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541280-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-541280-m02 --driver=docker  --container-runtime=crio: exit status 14 (100.086189ms)

-- stdout --
	* [multinode-541280-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-541280-m02' is duplicated with machine name 'multinode-541280-m02' in profile 'multinode-541280'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541280-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541280-m03 --driver=docker  --container-runtime=crio: (33.023128801s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-541280
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-541280: exit status 80 (358.494522ms)

-- stdout --
	* Adding node m03 to cluster multinode-541280 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-541280-m03 already exists in multinode-541280-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-541280-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-541280-m03: (2.092451334s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.63s)

TestPreload (120.31s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-558504 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-558504 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m1.602812764s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-558504 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-558504 image pull gcr.io/k8s-minikube/busybox: (2.345843605s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-558504
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-558504: (5.996273045s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-558504 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1206 11:18:15.605302  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-558504 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.654820556s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-558504 image list
helpers_test.go:175: Cleaning up "test-preload-558504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-558504
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-558504: (2.463802357s)
--- PASS: TestPreload (120.31s)
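The preload round-trip the test performs, as a hand-run sketch (profile name illustrative):

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true
    minikube -p preload-demo image list   # the pulled busybox image should survive the restart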

TestScheduledStopUnix (112.37s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-881545 --memory=3072 --driver=docker  --container-runtime=crio
E1206 11:18:35.937027  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-881545 --memory=3072 --driver=docker  --container-runtime=crio: (36.083532951s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-881545 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 11:19:10.819447  647939 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:19:10.819587  647939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:19:10.819599  647939 out.go:374] Setting ErrFile to fd 2...
	I1206 11:19:10.819604  647939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:19:10.819863  647939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:19:10.820100  647939 out.go:368] Setting JSON to false
	I1206 11:19:10.820212  647939 mustload.go:66] Loading cluster: scheduled-stop-881545
	I1206 11:19:10.820586  647939 config.go:182] Loaded profile config "scheduled-stop-881545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:19:10.820710  647939 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/config.json ...
	I1206 11:19:10.820910  647939 mustload.go:66] Loading cluster: scheduled-stop-881545
	I1206 11:19:10.821036  647939 config.go:182] Loaded profile config "scheduled-stop-881545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-881545 -n scheduled-stop-881545
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-881545 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 11:19:11.298764  648029 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:19:11.298989  648029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:19:11.299021  648029 out.go:374] Setting ErrFile to fd 2...
	I1206 11:19:11.299044  648029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:19:11.299383  648029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:19:11.299670  648029 out.go:368] Setting JSON to false
	I1206 11:19:11.300738  648029 daemonize_unix.go:73] killing process 647954 as it is an old scheduled stop
	I1206 11:19:11.305175  648029 mustload.go:66] Loading cluster: scheduled-stop-881545
	I1206 11:19:11.305656  648029 config.go:182] Loaded profile config "scheduled-stop-881545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:19:11.305784  648029 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/config.json ...
	I1206 11:19:11.306011  648029 mustload.go:66] Loading cluster: scheduled-stop-881545
	I1206 11:19:11.306209  648029 config.go:182] Loaded profile config "scheduled-stop-881545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 11:19:11.311868  488068 retry.go:31] will retry after 132.885µs: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.312992  488068 retry.go:31] will retry after 157.75µs: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.314045  488068 retry.go:31] will retry after 328.883µs: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.315104  488068 retry.go:31] will retry after 246.287µs: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.316224  488068 retry.go:31] will retry after 468.33µs: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.317316  488068 retry.go:31] will retry after 943.293µs: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.318420  488068 retry.go:31] will retry after 1.451005ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.320594  488068 retry.go:31] will retry after 1.734479ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.322790  488068 retry.go:31] will retry after 3.360907ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.326985  488068 retry.go:31] will retry after 2.915976ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.330200  488068 retry.go:31] will retry after 5.804745ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.336413  488068 retry.go:31] will retry after 6.232409ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.343634  488068 retry.go:31] will retry after 10.253483ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.354879  488068 retry.go:31] will retry after 25.735025ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
I1206 11:19:11.381113  488068 retry.go:31] will retry after 38.339458ms: open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-881545 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-881545 -n scheduled-stop-881545
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-881545
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-881545 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 11:19:37.259163  648391 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:19:37.259306  648391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:19:37.259318  648391 out.go:374] Setting ErrFile to fd 2...
	I1206 11:19:37.259324  648391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:19:37.259582  648391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:19:37.259831  648391 out.go:368] Setting JSON to false
	I1206 11:19:37.259931  648391 mustload.go:66] Loading cluster: scheduled-stop-881545
	I1206 11:19:37.260326  648391 config.go:182] Loaded profile config "scheduled-stop-881545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:19:37.260403  648391 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/scheduled-stop-881545/config.json ...
	I1206 11:19:37.260610  648391 mustload.go:66] Loading cluster: scheduled-stop-881545
	I1206 11:19:37.260731  648391 config.go:182] Loaded profile config "scheduled-stop-881545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1206 11:20:13.255391  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-881545
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-881545: exit status 7 (67.521161ms)
-- stdout --
	scheduled-stop-881545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-881545 -n scheduled-stop-881545
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-881545 -n scheduled-stop-881545: exit status 7 (74.636771ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-881545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-881545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-881545: (4.618968315s)
--- PASS: TestScheduledStopUnix (112.37s)
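The schedule-then-cancel flow this test drives can be reproduced directly against the minikube CLI; a sketch using os/exec (profile name taken from the log above, binary assumed to be on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube, mirroring the test's (dbg) Run steps.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("minikube %v:\n%s\n", args, out)
	return err
}

func main() {
	// Schedule a stop 15 seconds out, then cancel it before it fires,
	// the same sequence scheduled_stop_test.go exercises above.
	if err := run("stop", "-p", "scheduled-stop-881545", "--schedule", "15s"); err != nil {
		panic(err)
	}
	if err := run("stop", "-p", "scheduled-stop-881545", "--cancel-scheduled"); err != nil {
		panic(err)
	}
}
```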

TestInsufficientStorage (13.26s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-991250 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-991250 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.652436569s)
-- stdout --
	{"specversion":"1.0","id":"ce6d29f2-9036-4380-a78e-4bddbc06f5e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-991250] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f77cf951-bf6c-4d9d-aaf4-c826abd06ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"7d626894-75d9-4b3b-a962-752b745c9d94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"77aba5f2-a422-4846-ad61-cc125dee5e8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig"}}
	{"specversion":"1.0","id":"7ac13f33-ef22-496e-bd3a-82cf053fd954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube"}}
	{"specversion":"1.0","id":"1df0e639-7e6e-48c8-92e6-d80cb1d53482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7a6095ea-21de-49d2-9dd3-72c905ba6d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5a5fc9b5-c431-4a11-9890-705e52e3f0fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d89b7cbc-f917-416a-833a-c088918cf3df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c25578dd-4b07-4a09-b9b9-24ce4e710106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2564bacf-5dd5-479e-b2ba-0689028e257a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"36ebe894-0290-4887-95dc-d99b7c79b1e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-991250\" primary control-plane node in \"insufficient-storage-991250\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0645b7da-ef73-41ba-8964-d433502fcc8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6298dcc-bb08-4d71-af77-3f731d44bfb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d0c38ca-bccc-4176-8efe-18a6e52b87a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-991250 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-991250 --output=json --layout=cluster: exit status 7 (302.347766ms)
-- stdout --
	{"Name":"insufficient-storage-991250","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-991250","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1206 11:20:37.982198  650106 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-991250" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-991250 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-991250 --output=json --layout=cluster: exit status 7 (324.189485ms)
-- stdout --
	{"Name":"insufficient-storage-991250","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-991250","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1206 11:20:38.307247  650175 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-991250" does not appear in /home/jenkins/minikube-integration/22049-484819/kubeconfig
	E1206 11:20:38.317774  650175 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/insufficient-storage-991250/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-991250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-991250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-991250: (1.979329675s)
--- PASS: TestInsufficientStorage (13.26s)
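Each stdout line from `minikube start --output=json` above is a self-contained CloudEvents envelope. A sketch of decoding the stream (field names copied verbatim from the output; the struct itself is illustrative, not minikube's internal type):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope fields visible in the log; "data" is kept as a
// string map because its keys vary by event type (step, info, error).
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json` into stdin, one event per line.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			// e.g. exitcode 26 / RSRC_DOCKER_STORAGE in the run above.
			fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
		}
	}
}
```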

TestRunningBinaryUpgrade (304.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2860867516 start -p running-upgrade-684602 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2860867516 start -p running-upgrade-684602 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.757245669s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-684602 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:30:13.255283  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:31:36.329480  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:33:15.604984  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:33:35.937330  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-684602 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.743426393s)
helpers_test.go:175: Cleaning up "running-upgrade-684602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-684602
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-684602: (1.992409709s)
--- PASS: TestRunningBinaryUpgrade (304.53s)
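The upgrade test's core pattern is two `start` calls against the same running profile, differing only in the binary. A sketch using os/exec (binary paths taken from the log; flags condensed, e.g. the old release uses --vm-driver while the new binary uses --driver):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runStart invokes a given minikube binary against the same profile; only
// the binary changes between the two calls.
func runStart(binary string, extra ...string) {
	args := append([]string{"start", "-p", "running-upgrade-684602", "--memory=3072"}, extra...)
	args = append(args, "--driver=docker", "--container-runtime=crio")
	out, err := exec.Command(binary, args...).CombinedOutput()
	fmt.Printf("%s: err=%v\n%s\n", binary, err, out)
}

func main() {
	// 1. Bring the cluster up with the released v1.35.0 binary.
	runStart("/tmp/minikube-v1.35.0.2860867516")
	// 2. Re-run start on the same, still-running profile with the binary
	//    under test; it must adopt and upgrade the existing cluster.
	runStart("out/minikube-linux-arm64", "--alsologtostderr", "-v=1")
}
```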

TestMissingContainerUpgrade (115.78s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3805627654 start -p missing-upgrade-707485 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3805627654 start -p missing-upgrade-707485 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.99772715s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-707485
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-707485
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-707485 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:28:15.606558  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:28:35.937517  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-707485 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.323995371s)
helpers_test.go:175: Cleaning up "missing-upgrade-707485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-707485
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-707485: (2.335582495s)
--- PASS: TestMissingContainerUpgrade (115.78s)

TestPause/serial/Start (89.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-362686 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-362686 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m29.339645372s)
--- PASS: TestPause/serial/Start (89.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858438 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-858438 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (125.538498ms)
-- stdout --
	* [NoKubernetes-858438] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

TestNoKubernetes/serial/StartWithK8s (42.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.531512715s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-858438 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.99s)

TestNoKubernetes/serial/StartWithStopK8s (13.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.318519441s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-858438 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-858438 status -o json: exit status 2 (348.667478ms)
-- stdout --
	{"Name":"NoKubernetes-858438","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-858438
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-858438: (2.039693826s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.71s)
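The one-line JSON from `status -o json` above maps onto a small struct; a sketch (struct defined here for illustration, not minikube's internal type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields in the status output shown above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-858438","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s profileStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// With --no-kubernetes the host runs but kubelet and apiserver stay
	// stopped, which is why the status command exits 2 in the test above.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}
```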

TestNoKubernetes/serial/Start (8.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.364801515s)
--- PASS: TestNoKubernetes/serial/Start (8.36s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22049-484819/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-858438 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-858438 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.931155ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
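The exit status 1 here is the expected outcome: inside the guest, `systemctl is-active` exits non-zero for an inactive unit (surfaced above as `ssh: Process exited with status 3`), and `minikube ssh` propagates that failure. A sketch of the same assertion in Go (profile name from the log, binary assumed on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs: a non-zero exit means kubelet is not
	// active, the expected state for a --no-kubernetes profile.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-858438",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("kubelet inactive, exit code:", ee.ExitCode())
			return
		}
		panic(err)
	}
	fmt.Println("kubelet unexpectedly active")
}
```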

TestNoKubernetes/serial/ProfileList (1.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-858438
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-858438: (1.320460291s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (7.13s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858438 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858438 --driver=docker  --container-runtime=crio: (7.126672054s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.13s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-858438 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-858438 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.936801ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestNetworkPlugins/group/false (3.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-334090 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-334090 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (196.42583ms)
-- stdout --
	* [false-334090] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1206 11:22:01.405180  659675 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:22:01.405410  659675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:01.405423  659675 out.go:374] Setting ErrFile to fd 2...
	I1206 11:22:01.405429  659675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:22:01.405714  659675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-484819/.minikube/bin
	I1206 11:22:01.406205  659675 out.go:368] Setting JSON to false
	I1206 11:22:01.407318  659675 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14673,"bootTime":1765005449,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1206 11:22:01.407398  659675 start.go:143] virtualization:  
	I1206 11:22:01.411076  659675 out.go:179] * [false-334090] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:22:01.414977  659675 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 11:22:01.415152  659675 notify.go:221] Checking for updates...
	I1206 11:22:01.420974  659675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:22:01.423961  659675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-484819/kubeconfig
	I1206 11:22:01.426803  659675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-484819/.minikube
	I1206 11:22:01.429712  659675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:22:01.432562  659675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:22:01.435855  659675 config.go:182] Loaded profile config "pause-362686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:22:01.436044  659675 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:22:01.465307  659675 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:22:01.465449  659675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:22:01.527502  659675 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:22:01.517819732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:22:01.527618  659675 docker.go:319] overlay module found
	I1206 11:22:01.532632  659675 out.go:179] * Using the docker driver based on user configuration
	I1206 11:22:01.535568  659675 start.go:309] selected driver: docker
	I1206 11:22:01.535593  659675 start.go:927] validating driver "docker" against <nil>
	I1206 11:22:01.535608  659675 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:22:01.539170  659675 out.go:203] 
	W1206 11:22:01.542204  659675 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 11:22:01.545167  659675 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-334090 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-334090

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-334090

>>> host: /etc/nsswitch.conf:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/hosts:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/resolv.conf:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-334090

>>> host: crictl pods:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: crictl containers:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> k8s: describe netcat deployment:
error: context "false-334090" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-334090" does not exist

>>> k8s: netcat logs:
error: context "false-334090" does not exist

>>> k8s: describe coredns deployment:
error: context "false-334090" does not exist

>>> k8s: describe coredns pods:
error: context "false-334090" does not exist

>>> k8s: coredns logs:
error: context "false-334090" does not exist

>>> k8s: describe api server pod(s):
error: context "false-334090" does not exist

>>> k8s: api server logs:
error: context "false-334090" does not exist

>>> host: /etc/cni:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: ip a s:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: ip r s:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: iptables-save:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: iptables table nat:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> k8s: describe kube-proxy daemon set:
error: context "false-334090" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-334090" does not exist

>>> k8s: kube-proxy logs:
error: context "false-334090" does not exist

>>> host: kubelet daemon status:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: kubelet daemon config:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> k8s: kubelet logs:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:21:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-362686
contexts:
- context:
    cluster: pause-362686
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:21:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-362686
  name: pause-362686
current-context: pause-362686
kind: Config
preferences: {}
users:
- name: pause-362686
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt
    client-key: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-334090

>>> host: docker daemon status:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: docker daemon config:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/docker/daemon.json:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: docker system info:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: cri-docker daemon status:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: cri-docker daemon config:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: cri-dockerd version:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: containerd daemon status:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: containerd daemon config:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/containerd/config.toml:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: containerd config dump:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: crio daemon status:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: crio daemon config:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: /etc/crio:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"

>>> host: crio config:
* Profile "false-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334090"
----------------------- debugLogs end: false-334090 [took: 3.428607569s] --------------------------------
helpers_test.go:175: Cleaning up "false-334090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-334090
--- PASS: TestNetworkPlugins/group/false (3.80s)

TestPause/serial/SecondStartNoReconfiguration (28.19s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-362686 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-362686 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.157673848s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.19s)

TestStoppedBinaryUpgrade/Setup (1.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestStoppedBinaryUpgrade/Upgrade (303.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1281663385 start -p stopped-upgrade-468509 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1281663385 start -p stopped-upgrade-468509 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.515417966s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1281663385 -p stopped-upgrade-468509 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1281663385 -p stopped-upgrade-468509 stop: (1.259942786s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-468509 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:35:13.255305  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-468509 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.370020831s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.15s)

TestNetworkPlugins/group/auto/Start (84.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1206 11:38:15.605421  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.83969905s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.84s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-334090 "pgrep -a kubelet"
I1206 11:38:33.445041  488068 config.go:182] Loaded profile config "auto-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kpjzb" [52a7b47c-7213-4f0b-978b-8ee7ad878da5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 11:38:35.936992  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kpjzb" [52a7b47c-7213-4f0b-978b-8ee7ad878da5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004007629s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)
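Note: NetCatPod force-replaces the netcat test deployment and then polls until a pod labeled app=netcat is Running and Ready. A manual equivalent, assuming the same manifest from the repository's testdata directory (the kubectl wait form below is a sketch; the test itself polls via its helpers):

    kubectl --context auto-334090 replace --force -f testdata/netcat-deployment.yaml
    # block until the deployment's pod reports Ready (timeout mirrors the test's 15m wait)
    kubectl --context auto-334090 wait --for=condition=Ready pod -l app=netcat --timeout=15m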

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.48s)
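Note: the DNS subtest resolves the kubernetes.default service name from inside the netcat pod, exercising both the CNI data path and cluster DNS in a single probe:

    kubectl --context auto-334090 exec deployment/netcat -- nslookup kubernetes.default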

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
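Note on the two nc probes above: -z opens the connection without sending any data (scan mode), -w 5 caps the connect timeout at five seconds, and -i 5 adds a delay between probes, so a zero exit status means a listener answered. Localhost checks that the pod can reach its own port directly, while HairPin connects back through the pod's own service name (here the netcat service), which only succeeds when the CNI supports hairpin NAT, i.e. routing service traffic back to the pod that originated it:

    # direct loopback probe inside the pod
    kubectl --context auto-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin probe via the pod's own service
    kubectl --context auto-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"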

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-468509
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-468509: (2.47155314s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (85.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.845290039s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (61.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.497769754s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-z89vg" [26a5a005-a102-43ca-9e77-4d955e2661fe] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1206 11:40:13.255402  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-137526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003577358s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
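Note: ControllerPod waits for the CNI's node agent to become healthy before the connectivity subtests run. A manual equivalent using kubectl wait, with the label and namespace taken from the test output above:

    kubectl --context calico-334090 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m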

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-334090 "pgrep -a kubelet"
I1206 11:40:19.109042  488068 config.go:182] Loaded profile config "calico-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-42fdw" [7e21321f-2951-4f49-b8dd-d7ddb729cd7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-42fdw" [7e21321f-2951-4f49-b8dd-d7ddb729cd7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004646482s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-bfmfr" [5c0e69c3-ebfd-4841-b623-de49d5ee0536] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004405731s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-334090 "pgrep -a kubelet"
I1206 11:40:33.945608  488068 config.go:182] Loaded profile config "kindnet-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rp7zx" [294ac7c5-5413-4525-b095-1ff17506bf15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rp7zx" [294ac7c5-5413-4525-b095-1ff17506bf15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003663819s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (64.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.267879777s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.27s)
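Note: besides built-in names such as kindnet, calico, flannel, and bridge, minikube's --cni flag also accepts a path to a custom CNI manifest, which is what this test exercises with the flannel YAML from testdata:

    out/minikube-linux-arm64 start -p custom-flannel-334090 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio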

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (77.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1206 11:41:39.012100  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.943585055s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-334090 "pgrep -a kubelet"
I1206 11:41:58.749508  488068 config.go:182] Loaded profile config "custom-flannel-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4gtq4" [4e3237e7-5497-4280-bcad-89d628bb6dc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4gtq4" [4e3237e7-5497-4280-bcad-89d628bb6dc9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004763887s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-334090 "pgrep -a kubelet"
I1206 11:42:30.695518  488068 config.go:182] Loaded profile config "enable-default-cni-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fc4nh" [3e760a76-dbc8-4b89-ba4e-ccad4afaac7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fc4nh" [3e760a76-dbc8-4b89-ba4e-ccad4afaac7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004540785s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (65.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.284337202s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (80.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1206 11:43:15.605788  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/functional-123579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:33.779517  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:33.786042  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:33.797575  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:33.818995  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:33.860567  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:33.941973  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:34.103558  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:34.425013  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:35.066569  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:35.937218  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/addons-463201/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:43:36.348051  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-334090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.192314882s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.19s)
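Note: the E1206 "Loading client cert failed" lines interleaved above come from client-go's TLS transport cache trying to reload client certificates for profiles (functional-123579, auto-334090, addons-463201) whose .minikube profile directories were most likely torn down by earlier tests; they are background noise, not failures of this test. That the certificate is simply gone can be confirmed directly (path copied from the log):

    ls -l /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt
    # expected: No such file or directory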

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rcw5j" [5356ace0-d789-47c4-99f8-041b659c1312] Running
E1206 11:43:38.910073  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003816593s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-334090 "pgrep -a kubelet"
E1206 11:43:44.032748  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1206 11:43:44.352421  488068 config.go:182] Loaded profile config "flannel-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7jq2j" [02fe4149-3a30-4544-a9d9-767f75a38b5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7jq2j" [02fe4149-3a30-4544-a9d9-767f75a38b5f] Running
E1206 11:43:54.279838  488068 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/auto-334090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003688713s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-334090 "pgrep -a kubelet"
I1206 11:44:28.878710  488068 config.go:182] Loaded profile config "bridge-334090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-334090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pr6n7" [830a7e50-0646-49fc-b030-f762b455b806] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pr6n7" [830a7e50-0646-49fc-b030-f762b455b806] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003812435s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-334090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-334090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (38/364)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
375 TestNetworkPlugins/group/kubenet 3.85
383 TestNetworkPlugins/group/cilium 4.24
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-001536 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-001536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-001536
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-334090 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-334090

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-334090

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/hosts:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/resolv.conf:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-334090

>>> host: crictl pods:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: crictl containers:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> k8s: describe netcat deployment:
error: context "kubenet-334090" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-334090" does not exist

>>> k8s: netcat logs:
error: context "kubenet-334090" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-334090" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-334090" does not exist

>>> k8s: coredns logs:
error: context "kubenet-334090" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-334090" does not exist

>>> k8s: api server logs:
error: context "kubenet-334090" does not exist

>>> host: /etc/cni:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: ip a s:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: ip r s:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: iptables-save:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: iptables table nat:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-334090" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-334090" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-334090" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: kubelet daemon config:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> k8s: kubelet logs:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:21:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-362686
contexts:
- context:
    cluster: pause-362686
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:21:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-362686
  name: pause-362686
current-context: pause-362686
kind: Config
preferences: {}
users:
- name: pause-362686
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt
    client-key: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-334090

>>> host: docker daemon status:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: docker daemon config:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: docker system info:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: cri-docker daemon status:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: cri-docker daemon config:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: cri-dockerd version:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: containerd daemon status:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: containerd daemon config:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: containerd config dump:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: crio daemon status:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: crio daemon config:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: /etc/crio:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

>>> host: crio config:
* Profile "kubenet-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334090"

----------------------- debugLogs end: kubenet-334090 [took: 3.682731685s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-334090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-334090
--- SKIP: TestNetworkPlugins/group/kubenet (3.85s)

TestNetworkPlugins/group/cilium (4.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-334090 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-334090

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-334090

>>> host: /etc/nsswitch.conf:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/hosts:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/resolv.conf:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-334090

>>> host: crictl pods:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: crictl containers:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> k8s: describe netcat deployment:
error: context "cilium-334090" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-334090" does not exist

>>> k8s: netcat logs:
error: context "cilium-334090" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-334090" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-334090" does not exist

>>> k8s: coredns logs:
error: context "cilium-334090" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-334090" does not exist

>>> k8s: api server logs:
error: context "cilium-334090" does not exist

>>> host: /etc/cni:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: ip a s:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: ip r s:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: iptables-save:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: iptables table nat:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-334090

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-334090

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-334090" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-334090" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-334090

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-334090

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-334090" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-334090" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-334090" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-334090" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-334090" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: kubelet daemon config:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> k8s: kubelet logs:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-484819/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:21:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-362686
contexts:
- context:
    cluster: pause-362686
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:21:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-362686
  name: pause-362686
current-context: pause-362686
kind: Config
preferences: {}
users:
- name: pause-362686
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.crt
    client-key: /home/jenkins/minikube-integration/22049-484819/.minikube/profiles/pause-362686/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-334090

>>> host: docker daemon status:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: docker daemon config:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: docker system info:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: cri-docker daemon status:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: cri-docker daemon config:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: cri-dockerd version:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: containerd daemon status:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: containerd daemon config:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: containerd config dump:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: crio daemon status:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: crio daemon config:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: /etc/crio:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

>>> host: crio config:
* Profile "cilium-334090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334090"

----------------------- debugLogs end: cilium-334090 [took: 4.057379036s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-334090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-334090
--- SKIP: TestNetworkPlugins/group/cilium (4.24s)
